haproxy does not make use of all available vcpus
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
octavia (Ubuntu) | Triaged | Wishlist | Unassigned |
Bug Description
[already filed upstream as https:/
HAProxy >= 1.8 offers an nbthread option to configure how many threads the haproxy process will use[0]. Scaling beyond one thread is known to improve performance in busy loadbalancers, especially when doing TLS termination.
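For illustration, a minimal sketch of the option in a haproxy.cfg global section (the thread count of 4 is an arbitrary example value, not something Octavia currently renders):

```
global
    daemon
    # Run 4 worker threads in the single haproxy process.
    # Only valid on HAProxy >= 1.8; the value here is an arbitrary example.
    nbthread 4
```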
However, Octavia does not configure nbthread, so amphorae with more than one vCPU at their disposal perform no better than those with a single core available.
As a workaround, admins can provide a fully custom haproxy Jinja template via the haproxy_template option in octavia.conf[1], but since templates are shared by all loadbalancers and rendered by the Octavia server, this effectively forces the cloud admin to choose a single (Nova) flavor for all their loadbalancers. A sketch of the workaround follows.
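A minimal sketch of the workaround, assuming the haproxy_template option lives in the [haproxy_amphora] section of octavia.conf (the template path is a hypothetical example):

```
[haproxy_amphora]
# Point Octavia at a fully custom haproxy Jinja template.
# The path below is a hypothetical example; any template placed here is
# shared by every loadbalancer the server renders.
haproxy_template = /etc/octavia/templates/custom_haproxy.cfg.j2
```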
I propose modifying the default haproxy.cfg templates to set the nbthread option automatically to the number of vCPUs a given amphora has. The parameter could then optionally be exposed via the Octavia API so operators can override it (via the octaviaclient and Horizon). A sketch of such a template change follows.
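A minimal sketch of what the default template change could look like; amphora_vcpus is a hypothetical template variable that the Octavia server would need to supply at render time, not an existing one:

```
global
    daemon
    {# amphora_vcpus is a hypothetical variable: the vCPU count of the
       target amphora, supplied by the server when rendering. #}
    {% if amphora_vcpus is defined and amphora_vcpus > 1 %}
    nbthread {{ amphora_vcpus }}
    {% endif %}
```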
Note: the behavior described above aligns with what HAProxy 2.0 already does automatically[2][3].
Note: nbthread is not supported by HAProxy < 1.8, and the haproxy process refuses to start if it finds unknown config keys. We would therefore also need a mechanism to prevent pushing this config option to amphorae running older haproxies (e.g. Ubuntu Xenial + OpenStack Queens). The manual override mentioned above could be a potential workaround; a version-guard sketch is shown below.
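One possible shape for such a guard in the template, assuming the amphora agent could report the installed haproxy version back to the server; haproxy_major and haproxy_minor are hypothetical variables introduced here for illustration:

```
{# haproxy_major / haproxy_minor are hypothetical variables the amphora
   agent would have to report; only emit nbthread when the installed
   haproxy is >= 1.8 and therefore understands the keyword. #}
{% if haproxy_major > 1 or (haproxy_major == 1 and haproxy_minor >= 8) %}
    nbthread {{ amphora_vcpus }}
{% endif %}
```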
[0] https:/
[1] https:/
[2] https:/
[3] on supported platforms, and tested to be true in Ubuntu 20.04
Changed in octavia (Ubuntu):
status: New → Triaged
importance: Undecided → Wishlist