[SR-IOV] An instance with 2 SR-IOV VF interfaces fails to boot if a compute node has 2 SR-IOV NICs in the same physnet
Bug #1576185 reported by Mikhail Chernik
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Mirantis OpenStack | Status tracked in 10.0.x | | |
10.0.x | Fix Committed | High | Elena Ezhova |
Bug Description
Environment:
MOS 9.0 ISO 232
1 controller + 2 computes
compute node-1: 2x10G 82599 NICs; the 2nd port is used for SR-IOV, 24 VFs, physnet2
compute node-3: 2x10G 82599 NICs; both ports are used for SR-IOV, 24 VFs per port, same physnet (physnet2); see the configuration sketch below
nova-compute.log: http://
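For reference, node-3's layout maps two physical NIC ports to the same physical network. A minimal sketch of the corresponding SR-IOV agent and Nova settings is shown below; the interface names eth1/eth2 are assumptions for illustration, not values taken from the snapshot:

# /etc/neutron/plugins/ml2/sriov_agent.ini (sketch, assumed interface names)
[sriov_nic]
physical_device_mappings = physnet2:eth1,physnet2:eth2

# /etc/nova/nova.conf (sketch; pci_passthrough_whitelist may be repeated, one entry per device)
[DEFAULT]
pci_passthrough_whitelist = {"devname": "eth1", "physical_network": "physnet2"}
pci_passthrough_whitelist = {"devname": "eth2", "physical_network": "physnet2"}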
Expected result:
Instance is in ACTIVE state on any compute node
Actual result:
Instance is in ERROR state after a timeout on the compute node with 2 SR-IOV NICs in the same physnet
Steps to reproduce:
Run this script on a freshly deployed environment (a rough CLI equivalent is sketched after the link):
http://
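The script itself is not reproduced here. As a rough sketch, reproducing the scenario with the Mitaka-era CLI amounts to creating two direct-vnic-type ports on the SR-IOV network and booting an instance with both of them attached (network, image and flavor names below are placeholders):

neutron port-create <sriov-net> --name sriov-port-1 --binding:vnic_type direct
neutron port-create <sriov-net> --name sriov-port-2 --binding:vnic_type direct
nova boot --flavor <flavor> --image <image> \
  --nic port-id=<sriov-port-1-id> --nic port-id=<sriov-port-2-id> sriov-test-vm

The instance is then expected to go ACTIVE regardless of which compute node it lands on.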
Diagnostic snapshot: http://
Changed in mos:
assignee: nobody → MOS Nova (mos-nova)
importance: Undecided → High
status: New → Confirmed
tags: added: area-neutron area-nova

Changed in mos:
assignee: MOS Neutron (mos-neutron) → Oleg Bondarev (obondarev)

Changed in mos:
assignee: Oleg Bondarev (obondarev) → Elena Ezhova (eezhova)

Changed in mos:
status: Confirmed → In Progress

Changed in mos:
status: In Progress → Fix Committed
Can't reproduce on 9.0 ISO #250. Also, I've found the following errors in the logs from your diagnostic snapshot:
node-3: neutron-sriov-agent.log

2016-04-28 11:15:20.704 33100 ERROR neutron.agent.linux.utils [req-9f230b78-31b1-483c-bdd4-8a29aedf9679 - - - - -] Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Operation not supported
2016-04-28 11:15:20.705 33100 WARNING neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [req-9f230b78-31b1-483c-bdd4-8a29aedf9679 - - - - -] Device fa:16:3e:ae:13:e5 does not support state change
2016-04-28 11:15:20.840 33100 DEBUG oslo_concurrency.lockutils [req-9f230b78-31b1-483c-bdd4-8a29aedf9679 - - - - -] Lock "qos-port" acquired by "neutron.agent.l2.extensions.qos.handle_port" :: waited 0.000s inner /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:270
2016-04-28 11:15:20.841 33100 INFO neutron.agent.l2.extensions.qos [req-9f230b78-31b1-483c-bdd4-8a29aedf9679 - - - - -] QoS extension did have no information about the port ec35f707-7e9e-4d93-850c-de765411f5f1 that we were trying to reset
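For context, the ERROR/WARNING pair above is typically produced when the SR-IOV NIC agent tries to set the VF link state via ip link and the 82599 (ixgbe) VFs reject the operation; roughly the equivalent of the following command, where the interface name and VF index are made up for illustration:

ip link set eth2 vf 7 state enable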
Did you enable Neutron QoS in your environment?