[SRU] iptables_manager can run very slowly when a large number of security group rules are present
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Ubuntu Cloud Archive | Fix Released | Undecided | Unassigned |
Icehouse | Fix Committed | Undecided | Unassigned |
neutron | Fix Released | Undecided | Kevin Benton |
Kilo | Fix Released | Undecided | Unassigned |
neutron (Ubuntu) | Fix Released | Medium | Unassigned |
Trusty | Fix Released | Medium | Unassigned |
Bug Description
[Impact]
We have customers that typically add a few hundred security group rules or more. We also typically run 30+ VMs per compute node. When about 10+ VMs with a large SG set all get scheduled to the same node, the L2 agent (OVS) can spend many minutes in the iptables_manager code applying the security group rules.
While there have been some patches that tried to address this in Juno and Kilo, they've either not helped as much as necessary, or broken SGs completely due to re-ordering of the iptables rules.
I've been able to show some pretty bad scaling with just a handful of VMs running in devstack based on today's code (May 8th, 2015) from upstream OpenStack.
[Test Case]
Here's what I tested:
1. I created a security group with 1000 TCP port rules (you could alternately have a smaller number of rules and more VMs, but it's quicker this way)
2. I booted VMs, specifying both the default and "large" SGs, and timed from the moment Neutron "learned" about the port until it completed its work
3. I got a :( pretty quickly
And here's some data:
1-3 VM - didn't time, less than 20 seconds
4th VM - 0:36
5th VM - 0:53
6th VM - 1:11
7th VM - 1:25
8th VM - 1:48
9th VM - 2:14
While it's busy adding the rules, the OVS agent is consuming pretty close to 100% of a CPU for most of this time (from top):
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
25767 stack 20 0 157936 76572 4416 R 89.2 0.5 50:14.28 python
And this is with only ~10K rules at this point (roughly 1,000 rules per VM across ~10 VMs)! When we start crossing the 20K point, VM boot failures start to happen.
I'm filing this bug since we need to take a closer look at this in Liberty and fix it; it's been this way since Havana and needs some TLC.
I've attached a simple script I've used to recreate this, and will start taking a look at options here.
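The attachment isn't reproduced here, but a minimal sketch of what such a reproduction script could look like is below. It assumes a devstack host with the era-appropriate `neutron` and `nova` CLIs configured via the usual OS_* environment variables; the security group name, port range, image, flavor, network UUID, and VM names are all placeholders, not values from the original attachment.

```python
#!/usr/bin/env python
"""Rough sketch of a reproduction script (not the original attachment).

Assumes a devstack host with the `neutron` and `nova` CLIs configured.
All names, counts, image, flavor and network UUID below are placeholders.
"""
import subprocess

SG_NAME = "large-sg"           # hypothetical security group name
NUM_RULES = 1000               # one TCP rule per port, as in the test case
NUM_VMS = 10
NET_ID = "<private-net-uuid>"  # replace with your tenant network UUID


def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)


# 1. Create a security group with a large number of TCP port rules.
run(["neutron", "security-group-create", SG_NAME])
for port in range(10000, 10000 + NUM_RULES):
    run(["neutron", "security-group-rule-create",
         "--direction", "ingress", "--protocol", "tcp",
         "--port-range-min", str(port), "--port-range-max", str(port),
         SG_NAME])

# 2. Boot VMs attached to both the default and the large SG, then time how
#    long the OVS agent takes to finish wiring each port (for example by
#    watching the agent log or `iptables-save | wc -l` on the compute host).
for i in range(NUM_VMS):
    run(["nova", "boot", "--flavor", "m1.tiny", "--image", "cirros",
         "--nic", "net-id=" + NET_ID,
         "--security-groups", "default," + SG_NAME,
         "sg-scale-test-%d" % i])
```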
[Regression Potential]
Minimal since this has been running in upstream stable for several releases now (Kilo, Liberty, Mitaka).
Changed in neutron: | |
assignee: | Brian Haley (brian-haley) → Kevin Benton (kevinbenton) |
status: | New → In Progress |
Changed in neutron: | |
assignee: | Kevin Benton (kevinbenton) → Brian Haley (brian-haley) |
Changed in neutron: | |
assignee: | Brian Haley (brian-haley) → Kevin Benton (kevinbenton) |
Changed in neutron: | |
milestone: | none → liberty-1 |
status: | Fix Committed → Fix Released |
Changed in neutron: | |
milestone: | liberty-1 → 7.0.0 |
tags: | added: trusty |
Changed in neutron (Ubuntu): | |
importance: | Undecided → Medium |
Changed in neutron (Ubuntu Trusty): | |
importance: | Undecided → Medium |
Changed in neutron (Ubuntu): | |
status: | Invalid → Fix Released |
Changed in cloud-archive: | |
status: | Invalid → Fix Released |
Changed in neutron (Ubuntu Trusty): | |
status: | Fix Committed → In Progress |
Kevin - I also think the change you mentioned at the bar might help too - make a chain for a particular SG containing its rules, then have jump rules from the instance-specific chain to each SG chain it belongs to. That's deeper surgery in the firewall code, but should be doable. I can work on that after the summit (call me a slacker for not doing it during the summit :)
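For illustration only (this is not code from Neutron's firewall driver), a rough sketch of the chain layout being suggested above, with hypothetical chain names and sample data: each security group gets one chain holding its rules, and each port's chain only contains jumps to the chains of the groups it belongs to, so a group's rules are rendered once rather than once per port.

```python
# Illustrative sketch only -- not Neutron code. One chain per security
# group holds that group's rules; per-port chains contain only jump
# rules to the SG chains the port is a member of. Chain-name prefixes
# and the sample data are hypothetical.

def sg_chain(sg_id):
    return "sg-" + sg_id[:10]


def port_chain(port_id):
    return "port-" + port_id[:10]


def render(ports_to_sgs, sg_rules):
    lines = []
    # Render each security group's rules exactly once, in its own chain.
    for sg_id, rules in sg_rules.items():
        chain = sg_chain(sg_id)
        lines.append(":%s - [0:0]" % chain)
        for rule in rules:
            lines.append("-A %s %s" % (chain, rule))
    # Per-port chains only jump to the SG chains they belong to.
    for port_id, sg_ids in ports_to_sgs.items():
        chain = port_chain(port_id)
        lines.append(":%s - [0:0]" % chain)
        for sg_id in sg_ids:
            lines.append("-A %s -j %s" % (chain, sg_chain(sg_id)))
    return "\n".join(lines)


if __name__ == "__main__":
    sample_rules = {"large-sg-uuid": ["-p tcp --dport %d -j RETURN" % p
                                      for p in (10000, 10001)]}
    sample_ports = {"port-1-uuid": ["large-sg-uuid"],
                    "port-2-uuid": ["large-sg-uuid"]}
    print(render(sample_ports, sample_rules))
```

The point of that layout is that wiring an additional port only adds a handful of jump rules, instead of re-rendering the group's thousand-odd rules into yet another per-port chain.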