2017-03-09 08:21:53 |
Zhipeng Shen |
bug |
|
|
added bug |
2017-03-09 08:31:41 |
Zhipeng Shen |
description |
Description
===========
Normally, in my OpenStack environment a VM that is live migrated with KVM sends several RARP packets once it resumes on the destination node.
In a neutron ML2 hierarchical port binding environment, however, I find that no RARP packets can be captured on the physical port attached to the VLAN provider's OVS bridge on the destination node when the VM migrates there.
Steps to reproduce
==================
1. Create a VXLAN network: netA
2. Create a subnet for netA: subA
3. Create a VM on the compute1 node: vmA
4. On the compute2 node, capture traffic on the physical port attached to the OVS bridge: tcpdump -i ens33 -w ens33.pcap
5. Live migrate the VM to the other compute node (compute2)
6. Open ens33.pcap in Wireshark (a command-level sketch of these steps follows below)
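A minimal command-level sketch of the reproduction, assuming the Kilo-era neutron/nova CLI clients; the image name, flavor, CIDR and network UUID are placeholders, not values from this report:
  # controller: create the network (tenant_network_types makes it vxlan), the subnet, and a VM on compute1
  neutron net-create netA
  neutron subnet-create --name subA netA 192.168.10.0/24
  nova boot vmA --image <image> --flavor m1.small --nic net-id=<netA-uuid> --availability-zone nova:compute1
  # compute2: capture only RARP frames on the provider NIC
  tcpdump -i ens33 -w ens33.pcap rarp
  # controller: trigger the live migration, then open ens33.pcap in Wireshark
  nova live-migration vmA compute2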
Expected result
===============
Several RARP packets are captured on ens33.
Actual result
=============
No RARP packets are captured on ens33.
Environment
===========
OpenStack: Kilo (2015.1.2)
OS: CentOS 7.1.1503
Libvirt: 1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c is a mechanism driver from New H3C Group (a provider of new IT solutions); it allocates a dynamic
#vlan segment for the existing "openvswitch" mechanism driver
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
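With this layout the provider wiring can be checked directly on each compute node; a sketch, assuming the default Kilo OVS agent naming for the peer ports between br-int and the physical bridge (int-br-ens33/phy-br-ens33):
  # br-ens33 should carry the provider NIC and the peer port towards br-int
  ovs-vsctl list-ports br-ens33
  ovs-vsctl list-ports br-int | grep br-ens33
  # the mapping the agent actually loaded
  grep bridge_mappings /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini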
Analysis
==============
After reading the live-migration code paths in nova, neutron-server and neutron-openvswitch-agent, I think this is a bug.
The relevant process, in brief:
1. source compute node (nova-compute), compute1
self.driver(libvirt).live_migration
dom.migrateToURI2 --------------- Execute the migration to the destination node
self._live_migration_monitor ------------------ Monitor until the migration finishes
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify the destination node
2.1. destination compute node (neutron-openvswitch-agent), compute2
rpc_loop ------ detects that the VM's tapXXXX port has been plugged
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list ------- The port details show that the port is still bound to
                                                 "compute1_physicnet1", not to the physical network
                                                 "compute2_physicnet1" that exists on the destination
                                                 compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan ----------- There is no matching physical bridge at this point, so no
                                      VLAN tag can be set on the tap port. Eventually br-ens33,
                                      the physical bridge, drops the RARP packets sent by the
                                      resuming VM (see the check commands after this outline).
2.2. destination compute node (nova-compute), compute2
post_live_migration_at_destination (nova/compute/manager.py)
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------ Notify neutron to move the port's binding:host_id to the destination host
3. controller node (neutron-server)
ml2_h3c: fills self._new_bound_segment and self._next_segments_to_bind with a compute2_physicnet1
         segment for the openvswitch driver
openvswitch: binds the port using the compute2_physicnet1 segment allocated by the level-0 driver ml2_h3c
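The ordering above can be observed from the destination node right after the migration; a sketch, where the port UUID and tap name are placeholders:
  # check when binding:host_id flips to compute2 relative to the agent processing the port
  neutron port-show <port-uuid> | grep binding
  # compute2: the freshly plugged tap port gets no usable local VLAN tag
  ovs-vsctl --columns=tag list Port tapXXXXXXXX
  # and br-ens33 has no flow rewriting a local VLAN for this port's traffic
  ovs-ofctl dump-flows br-ens33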
In the current Kilo flow, the ML2 drivers only finish binding the port in the final step 3, when nova notifies neutron-server that the port's binding:host_id has changed. With ML2 hierarchical port binding that is too late for the neutron-openvswitch-agent to get suitable port details from neutron-server, set the correct VLAN tag on the VM port, and add the relevant flows to the OVS bridges.
Liberty and Mitaka appear to have the same problem. |
|
|
2017-03-09 08:31:59 |
Zhipeng Shen |
description |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT solutions , allocates dynamic
#vlan segment for the existing mechanism driver "openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan ----------- There is not matched physical bridge at the time. As a
result, the tap port can not been set any vlan tag.
Eventually, br-ens33, the physical bridge, drops rarp
packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT solutions , allocates dynamic
#vlan segment for the existing mechanism driver "openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan ----------- There is not matched physical bridge at the time. As a
result, the tap port can not been set any vlan tag.
Eventually, br-ens33, the physical bridge, drops rarp
packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
|
2017-03-09 08:33:12 |
Zhipeng Shen |
description |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT solutions , allocates dynamic
#vlan segment for the existing mechanism driver "openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan ----------- There is not matched physical bridge at the time. As a
result, the tap port can not been set any vlan tag.
Eventually, br-ens33, the physical bridge, drops rarp
packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan ----------- There is not matched physical bridge at the time. As a
result, the tap port can not been set any vlan tag.
Eventually, br-ens33, the physical bridge, drops rarp
packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
|
2017-03-09 08:38:12 |
Zhipeng Shen |
description |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan ----------- There is not matched physical bridge at the time. As a
result, the tap port can not been set any vlan tag.
Eventually, br-ens33, the physical bridge, drops rarp
packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that
the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is not matched physical bridge at
the time. As a result, the tap port can not been set any
vlan tag.Eventually, br-ens33, the physical bridge, drops
rarp packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
|
2017-03-09 08:39:54 |
Zhipeng Shen |
description |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that
the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is not matched physical bridge at
the time. As a result, the tap port can not been set any
vlan tag.Eventually, br-ens33, the physical bridge, drops
rarp packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that
the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is not matched physical bridge at
the time. As a result, the tap port can not been set any
vlan tag.Eventually, br-ens33, the physical bridge,
drops rarp packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
|
2017-03-09 08:40:47 |
Zhipeng Shen |
description |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that
the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is not matched physical bridge at
the time. As a result, the tap port can not been set any
vlan tag.Eventually, br-ens33, the physical bridge,
drops rarp packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that
the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is not matched physical bridge at
the time. As a result, the tap port can not been set any
vlan tag.Eventually, br-ens33, the physical bridge,
drops rarp packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
|
2017-03-09 08:42:11 |
Zhipeng Shen |
description |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that
the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is not matched physical bridge at
the time. As a result, the tap port can not been set any
vlan tag.Eventually, br-ens33, the physical bridge,
drops rarp packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
Description
===========
Normaly, VM which migrates to destination node can send several RARP packets during KVM's live-migration in my openstack environment.
In neutron ML2 hierarchical port binding environment,
I find that the physical port associated to a vlan physical provider's ovs bridge on destination node cannot dump any rarp packets when VM migrates to destination node.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated to a ovs bridge in compute2 node:
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node: compute2 node
6. open ens33.pcap in wireshark
Expected result
===============
find several rarp packets
Actual result
=============
find not any rarp packets
Environment
===========
OpenStack:Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt:1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group which is a provider of New IT
#solutions , allocates dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
Analysis
==============
After reading the live-migration relevant code of nova, neutron-server and neutron-openvswitch-agent, I think that it may be a bug.
The brief relevant process:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 ---------------Excecute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --------- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list -------The port details shows that
the port still is bound to
"compute1_physicnet1", not the physical network
provider "compute2_physicnet1" existing in
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is not matched physical bridge at
the time. As a result, the tap port can not been set any
vlan tag.Eventually, br-ens33, the physical bridge,
drops rarp packets from the starting vm.
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------Notify neutron migrate port binding:host_id
3. controller node(neutron-server)
ml2_h3c: fill self._new_bound_segment and self._next_segments_to_bind with compute2_physicnet1
for openvswitch driver
openvswitch: bind port with compute2_physicnet1's allocated segment from level 0 driver ml2_h3c
In the current process of kilo, ml2 driver finishes port bind at the last step 3.
it's too late to make neutron-openvswitch-agent get suitable port details from neutron-server
to set correct vlan tag for vm port and adds relevant flow for ovs bridges that nova notifies neutron-server the
event that port changes binding_hostid in ml2 hierarchical port binding.
It seems that liberty, mitaka exists the same problem. |
|
2017-03-09 08:42:54 |
Zhipeng Shen |
description |
|
|
|
2017-03-09 08:45:04 |
Zhipeng Shen |
description |
|
|
|
2017-03-09 08:46:01 |
Zhipeng Shen |
description |
|
|
|
2017-03-09 08:48:08 |
Zhipeng Shen |
description |
|
|
|
2017-03-09 08:48:48 |
Zhipeng Shen |
description |
|
|
|
2017-03-09 08:55:55 |
Zhipeng Shen |
bug task added |
|
neutron |
|
2017-03-09 15:28:46 |
Anindita Das |
tags |
|
live-migration |
|
2017-03-13 02:50:59 |
Zhipeng Shen |
description |
|
Description
===========
Normally, a VM that migrates to the destination node sends several RARP packets during KVM live-migration in a simple ovs + vlan environment, now that the following bug is fixed:
https://bugs.launchpad.net/neutron/+bug/1414559
In a neutron ML2 hierarchical port binding environment, however, I find that no rarp packets can be captured on the physical port attached to the vlan provider's ovs bridge on the destination node when the VM migrates there.
Steps to reproduce
==================
1. create a vxlan type network: netA
2. create a subnet for netA: subA
3. create a vm in compute1 node: vmA
4. tcpdump the physical port associated with the ovs bridge on the compute2 node (see the command sketch after this list):
tcpdump -i ens33 -w ens33.pcap
5. live migrate the vm to the other compute node (compute2)
6. open ens33.pcap in wireshark
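A rough command-line sketch of the steps above; netA, subA, vmA, compute1/compute2 and ens33 are the names used in this report, while the flavor, image and CIDR are placeholders picked only for illustration:
neutron net-create netA                       # tenant_network_types makes this a vxlan network
neutron subnet-create netA 10.0.0.0/24 --name subA
nova boot --flavor m1.small --image <image> --nic net-id=<netA-uuid> \
    --availability-zone nova:compute1 vmA
# on compute2, capture on the interface plugged into br-ens33:
tcpdump -i ens33 -w ens33.pcap
# then trigger the migration:
nova live-migration vmA compute2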
Expected result
===============
several rarp packets are found
Actual result
=============
no rarp packets are found
Environment
===========
OpenStack: Kilo 2015.1.2
OS: CentOS 7.1.1503
Libvirt: 1.2.17
Logs & Configs
==============
hierarchical port binding configuration:
controller node:
#neutron
/etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,vlan
tenant_network_types = vxlan,vlan
mechanism_drivers=ml2_h3c,openvswitch
#ml2_h3c, a mechanism driver owned by New H3C Group (a provider of New IT
#solutions), allocates a dynamic vlan segment for the existing mechanism driver
#"openvswitch"
[ml2_type_vlan]
network_vlan_ranges = compute1_physicnet1:100:1000, compute2_physicnet1:100:1000
[ml2_type_vxlan]
vni_ranges=1:500
compute1 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute1_physicnet1:br-ens33
compute2 node:
#neutron
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
bridge_mappings=compute2_physicnet1:br-ens33
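To double-check the mapping on each compute node I use commands like the following (br-ens33 and the ini path are the ones from this setup; adjust for other deployments):
ovs-vsctl list-ports br-ens33    # ens33 should show up as a port of the provider bridge
ovs-ofctl dump-flows br-ens33    # flows installed by the ovs agent for bound ports
grep bridge_mappings /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini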
Analysis
==============
After reading the live-migration related code of nova, neutron-server and neutron-openvswitch-agent, I think this may be a bug.
The relevant process, in brief:
1. source compute node(nova-compute) compute1 node
self.driver(libvirt).live_migration
dom.migrateToURI2 --------------- Execute migration to dest node
self._live_migration_monitor------------------ Monitor migration finished
self._post_live_migration ---------------- Migration finished
self.compute_rpcapi.post_live_migration_at_destination --- Notify
destination node
2.1. destination compute node (neutron-openvswitch-agent) compute2 node
rpc_loop ------ monitor vm's tapxxxx port plug
self.process_network_ports
self.treat_devices_added_or_updated
self.plugin_rpc.get_devices_details_list ------- The port details show that
the port is still bound to
"compute1_physicnet1", not to the physical network
provider "compute2_physicnet1" that exists on the
destination compute node.
self.treat_vif_port
self.port_bound
self.provision_local_vlan --- There is no matching physical bridge at
this point. As a result, no vlan tag can be set on the
tap port. Eventually br-ens33, the physical bridge,
drops the rarp packets from the resuming vm (see the
sketch after this outline).
2.2 destination compute node (nova-compute) compute2 node
post_live_migration_at_destination nova/compute/manager.py
self.network_api.migrate_instance_finish
self._update_port_binding_for_instance ------------ Notify neutron to update the
port's binding:host_id
3. controller node(neutron-server)
ml2_h3c: fills self._new_bound_segment and self._next_segments_to_bind with a
compute2_physicnet1 segment for the openvswitch driver
openvswitch: binds the port with the compute2_physicnet1 segment allocated by the
level-0 driver ml2_h3c
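To make the failure in step 2.1 concrete, here is a minimal, self-contained sketch; it is not the real neutron-openvswitch-agent code, and provision_local_vlan_sketch is a made-up name, but it shows the lookup that fails on the destination node given the bridge_mappings above:
# Simplified illustration of the agent-side decision in step 2.1 (Python).
bridge_mappings = {'compute2_physicnet1': 'br-ens33'}   # compute2's mapping

def provision_local_vlan_sketch(physical_network, segmentation_id):
    phys_br = bridge_mappings.get(physical_network)
    if phys_br is None:
        # This branch is hit right after live-migration: the port details
        # still name 'compute1_physicnet1', which has no bridge on compute2,
        # so no vlan tag or flows are provisioned and br-ens33 drops the
        # rarp frames sent by the resuming vm.
        print("no bridge for physical network %s; port left unwired"
              % physical_network)
        return False
    print("provisioning local vlan on %s for %s (segmentation id %s)"
          % (phys_br, physical_network, segmentation_id))
    return True

provision_local_vlan_sketch('compute1_physicnet1', 101)   # fails on compute2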
In the current kilo workflow, the ml2 drivers only finish the port binding in the last step 3.
Because nova notifies neutron-server of the binding:host_id change this late, the
neutron-openvswitch-agent cannot get suitable port details from neutron-server in time
to set the correct vlan tag on the vm port and to add the relevant flows to the ovs
bridges in an ml2 hierarchical port binding setup.
Liberty and mitaka seem to have the same problem. |
|
2017-03-13 06:35:33 |
Zhipeng Shen |
information type |
Public |
Public Security |
|
2017-03-13 06:37:15 |
Zhipeng Shen |
information type |
Public Security |
Public |
|
2017-03-13 07:07:21 |
Zhipeng Shen |
nova: status |
New |
Confirmed |
|
2017-03-13 07:07:29 |
Zhipeng Shen |
nova: status |
Confirmed |
New |
|
2017-03-21 14:04:08 |
Dr. David Alan Gilbert |
bug |
|
|
added subscriber Dr. David Alan Gilbert |
2017-06-27 15:57:54 |
Sean Dague |
tags |
live-migration |
live-migration openstack-version.kilo |
|
2017-06-28 12:04:56 |
Sean Dague |
nova: status |
New |
In Progress |
|