'Flow table limit' is different from the actual one
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Juniper Openstack | Invalid | High | Divakar Dharanalakota |
R2.21.x | Fix Released | High | Divakar Dharanalakota |
Bug Description
Hi Team,
Customer reported the issue below in Contrail 2.21.3-55 and 2.21.3-57.
Even though 'Flow Table limit: 524288' is set, the flow table reaches its upper limit at around 200K entries and the 'Flow Table Full' drop counter goes up.
Per their observation, this issue is not seen in Contrail 2.21.2-36.
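For reference, the limits and counters quoted below can be collected on an affected compute node with the standard vrouter utilities; a minimal sketch (flags inferred from the output pasted in this report, so verify against each tool's help):

```
# Sketch: gather the data shown below on an affected compute node.
vrouter --info | grep -i "flow table"   # configured Flow Table / overflow limits
flow -r                                 # flow statistics: total/active/hold entries and rates
dropstats | grep -v " 0"                # non-zero drop counters (Flow Table Full, Discards, ...)
```

Comparing the 'Flow Table limit' from `vrouter --info` against the peak 'Active Entries' from `flow -r` is what exposes the mismatch reported here.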
*Build36
root@JunCom00:~# vrouter --info
vRouter module version 2.21.2 (Built by contrail-
Interfaces limit 4352
VRF tables limit 4096
NextHops limit 65536
MPLS Labels limit 11520
Bridge Table limit 262144
Bridge Table Overflow limit 4096
Flow Table limit 524288
Flow Table overflow limit 8192
Mirror entries limit 255
root@JunCom00:~# cat /etc/modprobe.
options vrouter vr_mpls_
2017-06-20 19:26:46 +0900
Flow Statistics
---------------
Total Entries --- Total = 486969, new = 623
Active Entries --- Total = 482874, new = 633
Hold Entries --- Total = 4095, new = -10
Fwd flow Entries - Total = 482874
drop flow Entries - Total = 0
NAT flow Entries - Total = 0
Rate of change of Active Entries
---
current rate = 894
Avg setup rate = 3475
Avg teardown rate = 0
Rate of change of Flow Entries
---
current rate = 879
root@JunCom00:~# date ; dropstats | grep -v " 0"
Tue Jun 20 19:26:57 JST 2017
Flow Unusable 3336327
Flow Table Full 349639
Flow Action Drop 1
Discards 218761
Cloned Original 18
=======
*Build57 default
root@JunCom01:~# vrouter --info
vRouter module version 2.21.3 (Built by contrail-
Interfaces limit 4352
VRF tables limit 4096
NextHops limit 65536
MPLS Labels limit 11520
Bridge Table limit 262144
Bridge Table Overflow limit 4096
Flow Table limit 524288
Flow Table overflow limit 8192
Mirror entries limit 255
root@JunCom01:~# cat /etc/modprobe.
options vrouter vr_mpls_
2017-06-20 19:12:55 +0900
Flow Statistics
---------------
Total Entries --- Total = 270210, new = -4
Active Entries --- Total = 266113, new = -4
Hold Entries --- Total = 4097, new = 0
Fwd flow Entries - Total = 266096
drop flow Entries - Total = 17
NAT flow Entries - Total = 0
Rate of change of Active Entries
---
current rate = -6
Avg setup rate = 5784
Avg teardown rate = 0
Rate of change of Flow Entries
---
current rate = -6
root@JunCom01:~# date ; dropstats | grep -v " 0"
Tue Jun 20 19:13:05 JST 2017
Flow Unusable 1697824
Flow Table Full 1886
Flow Action Drop 120
Discards 122792
Cloned Original 193
root@JunCom01:~# date ; dropstats | grep -v " 0"
Tue Jun 20 19:13:05 JST 2017
Flow Unusable 1713891
Flow Table Full 1894
Flow Action Drop 120
Discards 122818
Cloned Original 193
=======
*Build57 vrouter.conf was changed.
root@JunCom01:~# vrouter --info
vRouter module version 2.21.3 (Built by contrail-
Interfaces limit 4352
VRF tables limit 4096
NextHops limit 65536
MPLS Labels limit 11520
Bridge Table limit 262144
Bridge Table Overflow limit 4096
Flow Table limit 1048576 <<<<<< doubled from the default 524288
Flow Table overflow limit 8192
Mirror entries limit 255
root@JunCom01:~# cat /etc/modprobe.
options vrouter vr_mpls_
2017-06-20 18:57:05 +0900
Flow Statistics
---------------
Total Entries --- Total = 437197, new = 33
Active Entries --- Total = 433106, new = 39
Hold Entries --- Total = 4091, new = -6
Fwd flow Entries - Total = 433059
drop flow Entries - Total = 47
NAT flow Entries - Total = 0
Rate of change of Active Entries
---
current rate = 51
Avg setup rate = 5538
Avg teardown rate = 3047
Rate of change of Flow Entries
---
current rate = 43
root@JunCom01:~# date ; dropstats | grep -v " 0"
Tue Jun 20 18:57:08 JST 2017
Flow Unusable 17615548
Flow Table Full 4507
Flow Action Drop 85704
Discards 827012
Cloned Original 3099546
root@JunCom01:~# date ; dropstats | grep -v " 0"
Tue Jun 20 18:57:08 JST 2017
Flow Unusable 17628874
Flow Table Full 4511
Flow Action Drop 85704
Discards 827068
Cloned Original 3099546
=======
They changed the file (/etc/
After the change, the upper limit of flow entries rose to around 430K.
If they change this value multiple times, the actual limit appears to be roughly half of the configured value.
They need answers on the points below.
1. They do not want to reboot the server to recover from this problem; they need a workaround.
2. What is the root cause of this issue?
3. Does any other parameter change this behavior? In particular, since max_vm_flows is a parameter used by customers, they would like to know whether it has an impact.
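Regarding question 1: a sketch of how the flow table is usually sized, assuming the standard vrouter module parameter name `vr_flow_entries` (the truncated `options vrouter vr_...` lines above suggest this mechanism is already in use). Note that the parameter is read at module load time, so a vrouter module reload or reboot is normally required for it to take effect; this sketch is therefore not by itself the reboot-free workaround the customer is asking for.

```
# /etc/modprobe.d/vrouter.conf -- sketch only; the parameter name
# vr_flow_entries is an assumption based on the truncated
# 'options vrouter vr_...' lines earlier in this report.
options vrouter vr_flow_entries=524288
```

After a module reload, `vrouter --info` should be checked to confirm that the reported 'Flow Table limit' matches the configured value rather than half of it.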
-Regards,
Mehul Patel
tags: | added: vrouter |
Changed in juniperopenstack: | |
importance: | Undecided → Critical |
importance: | Critical → High |
assignee: | nobody → Hari Prasad Killi (haripk) |
milestone: | none → r2.21 |
Hi Team,
This issue occurred in a production environment, so please treat it with very high priority.
-Regards,
Mehul Patel