VMs can't ping each other in a DPDK scenario with NIC X710

Bug #1787148 reported by James Ren
Affects: Juniper Openstack
Status: New
Importance: Undecided
Assigned to: Unassigned

Bug Description

Setup:
contrail 5.0 + DPDK + openstack
NIC: X710
DPDK_UIO_DRIVER: igb_uio

The VMs obtain IP addresses, but they can't ping each other.

After debugging, I found that vhost0 is not visible from inside the vrouter-agent container, although it is visible outside the container, on the host.

(vrouter-agent)[root@cp1 /]$ vif -l
Vrouter Interface Table

Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, Ig=Igmp Trap Enabled

vif0/2 Socket: unix
            Type:Agent HWaddr:00:00:5e:00:01:00 IPaddr:0.0.0.0
            Vrf:65535 Mcast Vrf:65535 Flags:L3Er QOS:-1 Ref:3
            RX port packets:14851 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:14851 bytes:1286820 errors:11501
            TX packets:2830 bytes:256025 errors:0
            Drops:11501

vif0/3 PMD: tap6fa98856-60
            Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:1.0.0.3
            Vrf:2 Mcast Vrf:2 Flags:PL3L2DEr QOS:-1 Ref:24
            RX port packets:498 errors:0 syscalls:1
            RX queue packets:35 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:498 bytes:23228 errors:0
            TX packets:567 bytes:24524 errors:0
            ISID: 0 Bmac: 02:6f:a9:88:56:60
            Drops:38
            TX port packets:517 errors:50 syscalls:514

vif0/4 PMD: tap7fd231ec-13
            Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:2.0.0.3
            Vrf:3 Mcast Vrf:3 Flags:PL3L2DEr QOS:-1 Ref:24
            RX port packets:572 errors:0 syscalls:1
            RX queue packets:97 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:572 bytes:28869 errors:0
            TX packets:592 bytes:28690 errors:0
            ISID: 0 Bmac: 02:7f:d2:31:ec:13
            Drops:84
            TX port packets:542 errors:50 syscalls:538

(vrouter-agent)[root@cp1 /]$ exit
exit
[root@cp1 vrouter]# ifconfig vhost0
vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 192.168.10.80 netmask 255.255.255.0 broadcast 192.168.10.255
        inet6 fe80::6a05:caff:fe6b:b015 prefixlen 64 scopeid 0x20<link>
        ether 68:05:ca:6b:b0:15 txqueuelen 1000 (Ethernet)
        RX packets 13389 bytes 1198769 (1.1 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 13650 bytes 1001262 (977.7 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

From the log of the vrouter DPDK agent, it seems that adding vif0 (the physical eth device) failed.

[root@cp1 vrouter]# docker logs 42
INFO: dpdk started
vm.nr_hugepages = 12400
vm.max_map_count = 128960
net.ipv4.tcp_keepalive_time = 5
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_intvl = 1
net.core.wmem_max = 9160000
INFO: load rte_kni kernel module
modprobe: FATAL: Module rte_kni not found.
ERROR: failed to load rte_kni driver
WARNING: rte_ini kernel module is unavailable. Please install/insert it for Ubuntu 14.04 manually.
WARNING: binding information is already saved
INFO: Physical interface: enp47s0f1, mac=68:05:ca:6b:b0:15, pci=0000:2f:00.1
INFO: start '/bin/taskset -c 2-9 /usr/bin/contrail-vrouter-dpdk --no-daemon --socket-mem 1024,1024'
INFO: init vhost0... 1
2018-08-15 02:51:43,172 VROUTER: vRouter version: {"build-info": [{"build-time": "2018-08-09 17:42:27.900727", "build-hostname": "cc1f6bd48365", "build-user": "root", "build-version": "5.0.1"}]}
2018-08-15 02:51:43,172 VROUTER: DPDK version: DPDK 17.11.3
INFO: vhost0 is already up
2018-08-15 02:51:43,190 VROUTER: Log file : /var/log/contrail/contrail-vrouter-dpdk.log
2018-08-15 02:51:43,190 VROUTER: Bridge Table limit: 262144
2018-08-15 02:51:43,190 VROUTER: Bridge Table overflow limit: 53248
2018-08-15 02:51:43,190 VROUTER: Flow Table limit: 524288
2018-08-15 02:51:43,190 VROUTER: Flow Table overflow limit: 105472
2018-08-15 02:51:43,190 VROUTER: MPLS labels limit: 5120
2018-08-15 02:51:43,190 VROUTER: Nexthops limit: 65536
2018-08-15 02:51:43,190 VROUTER: VRF tables limit: 4096
2018-08-15 02:51:43,190 VROUTER: Packet pool size: 16383
2018-08-15 02:51:43,190 VROUTER: PMD Tx Descriptor size: 128
2018-08-15 02:51:43,190 VROUTER: PMD Rx Descriptor size: 128
2018-08-15 02:51:43,190 VROUTER: Maximum packet size: 9216
2018-08-15 02:51:43,190 VROUTER: EAL arguments:
2018-08-15 02:51:43,190 VROUTER: -n "4"
2018-08-15 02:51:43,190 VROUTER: --socket-mem "1024,1024"
2018-08-15 02:51:43,190 VROUTER: --lcores "(0-2)@(0-39),(8-9)@(0-39),10@2,11@3,12@4,13@5,14@6,15@7,16@8,17@9"
2018-08-15 02:51:43,192 EAL: Detected 40 lcore(s)
2018-08-15 02:51:43,235 EAL: No free hugepages reported in hugepages-1048576kB
2018-08-15 02:51:43,245 EAL: Probing VFIO support...
2018-08-15 02:51:51,810 EAL: PCI device 0000:2f:00.0 on NUMA socket 0
2018-08-15 02:51:51,810 EAL: probe driver: 8086:1572 net_i40e
2018-08-15 02:51:51,810 EAL: PCI device 0000:2f:00.1 on NUMA socket 0
2018-08-15 02:51:51,810 EAL: probe driver: 8086:1572 net_i40e
2018-08-15 02:51:51,810 EAL: PCI device 0000:5a:00.2 on NUMA socket 0
2018-08-15 02:51:51,810 EAL: probe driver: 8086:37d1 net_i40e
2018-08-15 02:51:51,810 EAL: PCI device 0000:5a:00.3 on NUMA socket 0
2018-08-15 02:51:51,810 EAL: probe driver: 8086:37d1 net_i40e
2018-08-15 02:51:51,817 VROUTER: Found 0 eth device(s)
2018-08-15 02:51:51,817 VROUTER: Using 8 forwarding lcore(s)
2018-08-15 02:51:51,817 VROUTER: Using 0 IO lcore(s)
2018-08-15 02:51:51,817 VROUTER: Using 5 service lcores
2018-08-15 02:51:51,817 VROUTER: Max HOLD flow entries set to 1000
2018-08-15 02:51:51,817 VROUTER: set fd limit to 4864 (prev 65536, max 65536)
2018-08-15 02:51:51,829 VROUTER: Starting NetLink...
2018-08-15 02:51:51,830 VROUTER: Lcore 10: distributing MPLSoGRE packets to [11,12,13,14,15,16,17]
2018-08-15 02:51:51,830 VROUTER: Lcore 13: distributing MPLSoGRE packets to [10,11,12,14,15,16,17]
2018-08-15 02:51:51,830 VROUTER: Lcore 15: distributing MPLSoGRE packets to [10,11,12,13,14,16,17]
2018-08-15 02:51:51,830 VROUTER: Lcore 16: distributing MPLSoGRE packets to [10,11,12,13,14,15,17]
2018-08-15 02:51:51,830 UVHOST: Starting uvhost server...
2018-08-15 02:51:51,831 USOCK: usock_alloc[7fcb042c5700]: new socket FD 58
UVHOST: server event FD is 59
2018-08-15 02:51:51,831 VROUTER: Lcore 12: distributing MPLSoGRE packets to [10,11,13,14,15,16,17]
USOCK: usock_alloc[7fcb042c5700]: setting socket FD 58 send buff size.
Buffer size set to 18320000 (requested 9216000)
VROUTER: Lcore 11: distributing MPLSoGRE packets to [10,12,13,14,15,16,17]
2018-08-15 02:51:51,831 VROUTER: Lcore 17: distributing MPLSoGRE packets to [10,11,12,13,14,15,16]
2018-08-15 02:51:51,831 VROUTER: Lcore 14: distributing MPLSoGRE packets to [10,11,12,13,15,16,17]
2018-08-15 02:51:51,831 UVHOST: server socket FD is 60
2018-08-15 02:51:51,831 VROUTER: NetLink TCP socket FD is 58
2018-08-15 02:51:51,831 VROUTER: uvhost Unix socket FD is 61
2018-08-15 02:51:51,831 UVHOST: Handling connection FD 60...
2018-08-15 02:51:51,831 UVHOST: FD 60 accepted new NetLink connection FD 62
2018-08-15 02:51:52,369 DPCORE: vrouter soft reset start
2018-08-15 02:51:52,395 DPCORE: vrouter soft reset done (0)
>> 2018-08-15 02:51:52,417 VROUTER: Error adding vif 0 eth device enp47s0f1: no port ID found for PCI 0000:2f:00.1
>> 2018-08-15 02:51:52,418 VROUTER: Deleting vif 0 eth device
>> 2018-08-15 02:51:52,418 VROUTER: error deleting eth dev: already removed
2018-08-15 02:51:52,418 VROUTER: Adding vif 2 (gen. 2) packet device unix
2018-08-15 02:51:52,418 USOCK: usock_alloc[7fcb042c5700]: new socket FD 65
2018-08-15 02:51:52,418 USOCK: usock_alloc[7fcb042c5700]: setting socket FD 65 send buff size.
Buffer size set to 18320000 (requested 9216000)
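The key line is "Error adding vif 0 eth device enp47s0f1: no port ID found for PCI 0000:2f:00.1", which together with "Found 0 eth device(s)" suggests the NIC was never successfully probed under igb_uio. A minimal sketch for checking what driver the device is actually bound to via sysfs (the `pci_driver` helper and its sysfs-root parameter are mine, not part of any Contrail or DPDK tooling; on a real host you would pass `/sys` and the PCI address from the log):

```shell
#!/usr/bin/env bash
# Print the kernel driver a PCI device is currently bound to.
# $1 = sysfs root (normally /sys), $2 = PCI address, e.g. 0000:2f:00.1
pci_driver() {
  local link="$1/bus/pci/devices/$2/driver"
  # If the driver symlink is absent, the device is not bound to any driver.
  [ -e "$link" ] || { echo "unbound"; return; }
  basename "$(readlink -f "$link")"
}

# On the affected host one would expect, if binding had succeeded:
#   pci_driver /sys 0000:2f:00.1    # -> igb_uio
# It is also worth confirming the module is loaded at all:
#   lsmod | grep igb_uio
```

If the device reports a different driver (e.g. still `i40e`) or `unbound`, that would explain why the DPDK EAL probe finds no usable port for that PCI address.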
