2017-09-07 00:54:21 |
Daniel Axtens |
bug |
|
|
added bug |
2017-09-08 15:57:01 |
Bryan Quigley |
bug |
|
|
added subscriber Bryan Quigley |
2017-09-09 16:24:44 |
Dominique Poulain |
bug |
|
|
added subscriber Dominique Poulain |
2018-02-06 04:44:53 |
Daniel Axtens |
description |
(This bug provides a place to track the progress of this issue upstream and then into Ubuntu.)
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate as jumbo frames are limited to 9000 bytes. However, if a large packet with large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
Turning off TSO prevents the crash, as the kernel resegments the data and assembles the packets in software. This has a performance cost.
Clearly, at the very least, bnx2x should not crash in this case.
One patch to do this was sent upstream: https://www.spinics.net/lists/netdev/msg452932.html |
SRU Justification
=================
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate. However, if a large packet with very large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
Impact
------
The bnx2x card panics, requiring a power cycle to restore functionality.
The workaround is turning off TSO, which prevents the crash, as the kernel resegments *all* packets in software, not just the ones that are too big. This has a performance cost.
Fix
---
Test packet size in the bnx2x feature check path.
Regression Potential
--------------------
Limited to the bnx2x card driver.
The most likely failure case is a false positive on the size check, which would lead to a performance regression only. |
|
2018-02-06 06:48:26 |
Daniel Axtens |
description |
SRU Justification
=================
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate. However, if a large packet with very large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
Impact
------
The bnx2x card panics, requiring a power cycle to restore functionality.
The workaround is turning off TSO, which prevents the crash, as the kernel resegments *all* packets in software, not just the ones that are too big. This has a performance cost.
Fix
---
Test packet size in the bnx2x feature check path.
Regression Potential
--------------------
Limited to the bnx2x card driver.
The most likely failure case is a false positive on the size check, which would lead to a performance regression only. |
SRU Justification
=================
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate. However, if a large packet with very large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
Impact
------
The bnx2x card panics, requiring a power cycle to restore functionality.
The workaround is turning off TSO, which prevents the crash, as the kernel resegments *all* packets in software, not just the ones that are too big. This has a performance cost.
Fix
---
Test packet size in the bnx2x feature check path and disable GSO for packets that are too large.
Regression Potential
--------------------
Limited to the bnx2x card driver.
The most likely failure case is a false positive on the size check, which would lead to a performance regression only. |
|
2018-02-08 22:54:40 |
Daniel Axtens |
cve linked |
|
2018-1000026 |
|
2018-02-08 22:54:48 |
Daniel Axtens |
description |
SRU Justification
=================
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate. However, if a large packet with very large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
Impact
------
The bnx2x card panics, requiring a power cycle to restore functionality.
The workaround is turning off TSO, which prevents the crash, as the kernel resegments *all* packets in software, not just the ones that are too big. This has a performance cost.
Fix
---
Test packet size in the bnx2x feature check path and disable GSO for packets that are too large.
Regression Potential
--------------------
Limited to the bnx2x card driver.
The most likely failure case is a false positive on the size check, which would lead to a performance regression only. |
SRU Justification
=================
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate. However, if a large packet with very large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
[Impact]
The bnx2x card panics, requiring a power cycle to restore functionality.
The workaround is turning off TSO, which prevents the crash, as the kernel resegments *all* packets in software, not just the ones that are too big. This has a performance cost.
[Fix]
Test packet size in the bnx2x feature check path and disable GSO for packets that are too large.
[Regression Potential]
Limited to the bnx2x card driver.
The most likely failure case is a false positive on the size check, which would lead to a performance regression only. |
|
2018-02-09 00:08:10 |
Daniel Axtens |
description |
SRU Justification
=================
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate. However, if a large packet with very large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
[Impact]
The bnx2x card panics, requiring a power cycle to restore functionality.
The workaround is turning off TSO, which prevents the crash, as the kernel resegments *all* packets in software, not just the ones that are too big. This has a performance cost.
[Fix]
Test packet size in the bnx2x feature check path and disable GSO for packets that are too large.
[Regression Potential]
Limited to the bnx2x card driver.
The most likely failure case is a false positive on the size check, which would lead to a performance regression only. |
SRU Justification
=================
A ppc64le system runs as a guest under PowerVM. This guest has a bnx2x card attached, and uses Open vSwitch to bridge an ibmveth interface for traffic from other LPARs.
We sometimes see the following crash when running netperf:
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4323(enP24p1s0f2)]MC assert!
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:720(enP24p1s0f2)]XSTORM_ASSERT_LIST_INDEX 0x2
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:736(enP24p1s0f2)]XSTORM_ASSERT_INDEX 0x0 = 0x00000000 0x25e42a7e 0x00462a38 0x00010052
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_mc_assert:750(enP24p1s0f2)]Chip Revision: everest3, FW Version: 7_13_1
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_attn_int_deasserted3:4329(enP24p1s0f2)]driver assert
May 10 17:16:32 tuk6r1phn2 kernel: bnx2x: [bnx2x_panic_dump:923(enP24p1s0f2)]begin crash dump -----------------
... (dump of registers follows) ...
Subsequent debugging reveals that the packets causing the issue come through the ibmveth interface - from the AIX LPAR. The veth protocol is 'special' - communication between LPARs on the same chassis can use very large (64k) frames to reduce overhead. Normal networks cannot handle such large packets, so traditionally, the VIOS partition would signal to the AIX partitions that it was 'special', and AIX would send regular, ethernet-sized packets to VIOS, which VIOS would then send out.
This signalling between VIOS and AIX is done in a way that is not standards-compliant, and so was never made part of Linux. Instead, the Linux driver has always understood large frames and passed them up the network stack.
In some cases (e.g. with TCP), multiple TCP segments are coalesced into one large packet. In Linux, this goes through the generic receive offload code, using a mechanism similar to GSO. These segments can be very large, which presents as a very large MSS (maximum segment size) or gso_size.
Normally, the large packet is simply passed to whatever network application on Linux is going to consume it, and everything is OK.
However, in this case, the packets go through Open vSwitch, and are then passed to the bnx2x driver. The bnx2x driver/hardware supports TSO and GSO, but with a restriction: the maximum segment size is limited to around 9700 bytes. Normally this is more than adequate. However, if a large packet with very large (>9700 byte) TCP segments arrives through ibmveth, and is passed to bnx2x, the hardware will panic.
[Impact]
The bnx2x card panics, requiring a power cycle to restore functionality.
The workaround is turning off TSO, which prevents the crash, as the kernel resegments *all* packets in software, not just the ones that are too big. This has a performance cost.
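For reference, the workaround can be applied at runtime with ethtool, e.g. 'ethtool -K enP24p1s0f2 tso off' (using the interface name from the logs above; 'gso off' can be added to disable software GSO as well), and 'ethtool -k enP24p1s0f2' lists the current offload settings so the change can be verified.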
[Fix]
Test packet size in the bnx2x feature check path and disable GSO for packets that are too large. To do this, we move a function from one file to another and add another to the networking core.
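A minimal sketch of what such a feature-check hook could look like follows. This is illustrative rather than the exact merged patch: it assumes a networking-core helper along the lines of skb_gso_validate_mac_len() (checking that no resulting on-wire frame would exceed a given length), and the 9700-byte constant simply mirrors the limit described above.

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/if_vlan.h>

    /* Largest MAC-level frame the bnx2x firmware is understood to be
     * able to segment without asserting; anything bigger must be
     * segmented in software. (Illustrative constant, per the
     * description above.) */
    #define BNX2X_MAX_HW_SEG_FRAME 9700

    static netdev_features_t bnx2x_features_check(struct sk_buff *skb,
                                                  struct net_device *dev,
                                                  netdev_features_t features)
    {
            /* If this GSO skb could produce an on-wire frame (MAC
             * header plus up to gso_size bytes of payload) above the
             * hardware limit, clear the GSO feature bits for this skb
             * only. The stack then falls back to software segmentation
             * for it, while normal-sized traffic keeps full TSO/GSO. */
            if (unlikely(skb_is_gso(skb) &&
                         !skb_gso_validate_mac_len(skb, BNX2X_MAX_HW_SEG_FRAME)))
                    features &= ~NETIF_F_GSO_MASK;

            return vlan_features_check(skb, features);
    }

A hook like this is wired up through the driver's net_device_ops (.ndo_features_check), so the check runs once per transmitted skb and has no effect on packets that already fit the hardware limit.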
[Regression Potential]
A/B/X: The changes to the networking core are easily reviewed. The changes to behaviour are limited to the bnx2x card driver.
The most likely failure case is a false positive on the size check, which would lead to a performance regression only.
T: This also involves a different change to the networking core to add the old-style GSO checking, which is more invasive. However, the changes are simple and easily reviewed. |
|
2018-02-22 15:48:03 |
Launchpad Janitor |
linux (Ubuntu): status |
Confirmed |
Fix Released |
|
2018-02-22 15:48:03 |
Launchpad Janitor |
cve linked |
|
2017-5715 |
|
2018-02-28 16:59:42 |
Kleber Sacilotto de Souza |
nominated for series |
|
Ubuntu Artful |
|
2018-02-28 16:59:42 |
Kleber Sacilotto de Souza |
bug task added |
|
linux (Ubuntu Artful) |
|
2018-02-28 16:59:48 |
Kleber Sacilotto de Souza |
linux (Ubuntu Artful): status |
New |
Fix Committed |
|
2018-03-01 12:01:54 |
Kleber Sacilotto de Souza |
nominated for series |
|
Ubuntu Xenial |
|
2018-03-01 12:01:54 |
Kleber Sacilotto de Souza |
bug task added |
|
linux (Ubuntu Xenial) |
|
2018-03-01 12:01:58 |
Kleber Sacilotto de Souza |
linux (Ubuntu Xenial): status |
New |
Fix Committed |
|
2018-03-01 12:06:17 |
Kleber Sacilotto de Souza |
nominated for series |
|
Ubuntu Trusty |
|
2018-03-01 12:06:17 |
Kleber Sacilotto de Souza |
bug task added |
|
linux (Ubuntu Trusty) |
|
2018-03-01 12:06:22 |
Kleber Sacilotto de Souza |
linux (Ubuntu Trusty): status |
New |
Fix Committed |
|
2018-03-19 10:55:06 |
Stefan Bader |
tags |
|
verification-needed-trusty |
|
2018-03-19 10:55:38 |
Stefan Bader |
tags |
verification-needed-trusty |
verification-needed-trusty verification-needed-xenial |
|
2018-03-19 10:57:08 |
Stefan Bader |
tags |
verification-needed-trusty verification-needed-xenial |
verification-needed-artful verification-needed-trusty verification-needed-xenial |
|
2018-03-26 16:58:02 |
r.stricklin |
tags |
verification-needed-artful verification-needed-trusty verification-needed-xenial |
verification-done-xenial verification-needed-artful verification-needed-trusty |
|
2018-03-28 22:15:44 |
Daniel Axtens |
tags |
verification-done-xenial verification-needed-artful verification-needed-trusty |
verification-done-artful verification-done-xenial verification-needed-trusty |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
linux (Ubuntu Artful): status |
Fix Committed |
Fix Released |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-0861 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-1000407 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-15129 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-16994 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-17448 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-17450 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-17741 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-17805 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-17806 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2017-17807 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2018-5332 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2018-5333 |
|
2018-04-03 14:10:10 |
Launchpad Janitor |
cve linked |
|
2018-5344 |
|
2018-04-04 09:21:37 |
Launchpad Janitor |
linux (Ubuntu Trusty): status |
Fix Committed |
Fix Released |
|
2018-04-04 09:21:37 |
Launchpad Janitor |
cve linked |
|
2017-11089 |
|
2018-04-04 09:21:37 |
Launchpad Janitor |
cve linked |
|
2017-12762 |
|
2018-04-04 09:27:25 |
Launchpad Janitor |
linux (Ubuntu Xenial): status |
Fix Committed |
Fix Released |
|
2018-04-04 09:27:25 |
Launchpad Janitor |
cve linked |
|
2017-16995 |
|
2018-04-04 09:27:25 |
Launchpad Janitor |
cve linked |
|
2017-17862 |
|
2018-04-04 09:27:25 |
Launchpad Janitor |
cve linked |
|
2017-5753 |
|
2018-04-04 09:27:25 |
Launchpad Janitor |
cve linked |
|
2018-8043 |
|