[Env]
OpenStack: icehouse
OS: Ubuntu
l2 population: enabled
GRE tunnel: enabled
[Description]
If the DHCP and L3 agents run on the same host, then after that host goes down there is a probability that both are rescheduled to the same new host, and sometimes the OVS tunnel cannot be created on the newly scheduled host.
[Root cause]
After debugging, we found below log:
2015-01-14 13:44:18.284 9815 INFO neutron.plugins.ml2.drivers.l2pop.db [req-e36fe1fe-a08c-43c9-9d9c-75fe714d6f91 None] query:[<neutron.db.models_v2.Port[object at 7f8d706a3650] {tenant_id=u'ae27091dccf148249349d6396e10f230', id=u'2061f5e4-c4a0-42ae-b611-4fe6c2c5cfbd', name=u'', network_id=u'12ee7040-119a-47bf-a968-67509ebb8eda', mac_address=u'fa:16:3e:b6:20:8e', admin_state_up=True, status=u'ACTIVE', device_id=u'dhcp28f6fc30-af6e-5f44-ae85-dcc1cc074ee5-12ee7040-119a-47bf-a968-67509ebb8eda', device_owner=u'network:dhcp'}>, <neutron.db.models_v2.Port[object at 7f8d706a37d0] {tenant_id=u'ae27091dccf148249349d6396e10f230', id=u'6e99eae8-5c6a-4b8e-b9e1-dbd8d133dfa1', name=u'', network_id=u'12ee7040-119a-47bf-a968-67509ebb8eda', mac_address=u'fa:16:3e:22:56:ba', admin_state_up=True, status=u'ACTIVE', device_id=u'e63a0802-d86d-4a30-95fa-0005a6aef6fb', device_owner=u'network:router_interface'}>]
The log above shows that there is a probability that two ACTIVE ports appear in the DB together. However, in the l2pop mech_driver:
"
if agent_active_ports == 1 or (
        self.get_agent_uptime(agent) < cfg.CONF.l2pop.agent_boot_time):
"
only under this condition are the FDB entries added and notified to the agent, so the failures pop up.
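The failure mode can be sketched as follows. This is a minimal standalone sketch, not the actual Neutron code; the function and constant names (`should_send_full_fdb`, `AGENT_BOOT_TIME`) are illustrative, mirroring `agent_active_ports`, `get_agent_uptime` and `cfg.CONF.l2pop.agent_boot_time` from the snippet above:

```python
import time

# Illustrative stand-in for cfg.CONF.l2pop.agent_boot_time (seconds).
AGENT_BOOT_TIME = 180


def get_agent_uptime(heartbeat_started_at):
    """Seconds since the agent's first heartbeat (sketch of the real helper)."""
    return time.time() - heartbeat_started_at


def should_send_full_fdb(agent_active_ports, heartbeat_started_at):
    # The l2pop driver only sends the full FDB (tunnel) entries to an agent
    # when this is the first active port on that agent, or when the agent
    # has only recently booted.
    return (agent_active_ports == 1 or
            get_agent_uptime(heartbeat_started_at) < AGENT_BOOT_TIME)
```

When both the DHCP port and the router interface port land on the same new host at roughly the same time, the agent can end up with two ACTIVE ports in the DB at once, so `agent_active_ports == 1` is never observed; if the agent is also past its boot window, neither branch of the condition fires and the tunnel FDB entries are never sent.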