Kalle,
The chip firmware was a guess based on somebody else's feedback: http://lists.infradead.org/pipermail/ath10k/2016-January/006714.html
It might not be true at all; I have done some further investigation since creating this bug report a month ago.
My notes from back then:
- iperf TCP vs. UDP throughput measurements differ noticeably on the QCA6174;
- iperf (2.0.5, 2010) reports 200 Mbit/s on an Intel card (TCP workload);
- 200 Mbit/s is achievable on the QCA6174 when generating UDP workloads via iperf;
- Upload performance (laptop -> router -> server) via rsync over ssh initially caps at 4 MB/s (~32 Mbit/s). It may later reach 6 MB/s (~48 Mbit/s) or even 8.5 MB/s, but it never approaches the 200 Mbit/s peak UDP rate. That might seem acceptable, except that the numbers are clearly better with the Intel card.
- Download (server -> router -> laptop) performance with rsync caps at 62.69 MB/s (megabytes per second).
- This is not a server HDD bottleneck - the destination storage was an SSD.
- A TCP workload (iperf -c <addr>) initiated from the server side towards iperf -s on the QCA6174 WNIC side caps at 578 Mbit/s (~72 MB/s); see the example invocations after this list.
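
For reference, a rough sketch of how these numbers can be reproduced. The addresses, test duration and UDP bandwidth target below are placeholders, not the exact values from my runs:

  # TCP throughput, laptop (QCA6174) -> wired server
  server$ iperf -s
  laptop$ iperf -c 192.168.1.10 -t 30

  # UDP throughput; request more than the link can do so iperf reports the achieved rate
  server$ iperf -s -u
  laptop$ iperf -c 192.168.1.10 -u -b 300M -t 30

  # Reverse direction: server pushes TCP towards the QCA6174 side
  laptop$ iperf -s
  server$ iperf -c 192.168.1.20 -t 30

  # rsync over ssh transfers used for the MB/s figures
  laptop$ rsync -av --progress bigfile user@192.168.1.10:/tmp/
  laptop$ rsync -av --progress user@192.168.1.10:/tmp/bigfile .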
This maps well onto what you are saying about the longstanding problem with ath10k and the TCP stack.
I will try out the hack to confirm.