When deploying Octavia on a bionic-stein cloud, we are finding that every load balancer after the 10th goes immediately into ERROR state, with logs showing that Neutron reports the security_groups quota has been reached for the services project that Octavia runs under. (Neutron's default quota is 10 security groups per project, and Octavia creates one security group per load balancer, which is why the 11th creation fails.)
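For anyone hitting this, a quick way to confirm is to inspect the Neutron quotas on that project. Here is a minimal sketch using openstacksdk; the "mycloud" clouds.yaml entry and the literal project name "services" are assumptions, so adjust both to your deployment:

    import openstack

    conn = openstack.connect(cloud='mycloud')          # assumed clouds.yaml entry
    project = conn.identity.find_project('services')   # project Octavia runs under

    # Fetch the Neutron quota limits for that project.
    quota = conn.network.get_quota(project.id)
    print('security_groups:', quota.security_groups)             # Neutron default: 10
    print('security_group_rules:', quota.security_group_rules)   # Neutron default: 100
    print('ports:', quota.ports)                                 # Neutron default: 500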
It is my belief that the quotas on user-facing resources such as loadbalancers, pools, listeners, and ports should be the limiting factor for how many load balancers/pools can be managed. In practice, however, the back-end quotas on the service_domain/services project become the limit first. Those quotas should default to something like 1000 or -1 (unlimited) for security groups, security group rules, and ports, so that Octavia does not fail to create load balancers because of services-project quotas.
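As a workaround until the charm handles this, the quotas can be raised by hand. A minimal sketch with openstacksdk, under the same assumptions as above (a "mycloud" cloud entry and a project named "services"):

    import openstack

    conn = openstack.connect(cloud='mycloud')          # assumed clouds.yaml entry
    project = conn.identity.find_project('services')

    # Raise the Neutron quotas Octavia consumes per load balancer;
    # -1 means unlimited (1000 would be a finite alternative).
    conn.network.update_quota(
        project.id,
        security_groups=-1,
        security_group_rules=-1,
        ports=-1,
    )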
I believe the octavia charm should manage setting reasonable values for all of the associated quotas in the network and compute services it relies on for managing amphora VMs, VIPs, and ports.
Respectfully, I think this is a higher-than-wishlist bug; our default configuration is quite simply broken. If you deploy with the defaults, there is a landmine waiting the first time someone uses Octavia, and it is not trivial to debug if you have not run into it before.