Due to a wrong MTU setting, the network is unstable if a VM on a DPDK private network is booted from an image other than Cirros: the advertised MTU does not take the 4-byte VLAN tag into account.
Steps to reproduce:
1. Deploy an environment with the private network placed on a DPDK-enabled interface on a compute node
2. Create an Ubuntu Cloud image
3. Set the hw:mem_page_size=2048 metadata option on the m1.small flavor
4. Boot a VM from the Ubuntu Cloud image on the compute node with the DPDK-enabled interface
5. When the VM is fully booted, ping it from the controller and try to log into it via ssh
Expected result:
Both ping and ssh work
Actual result:
Ping works, but the ssh connection is closed after a timeout.
Workaround:
On the controller:
NS=$(ip netns list | grep qdhcp)
DEV=$(ip netns exec $NS ifconfig | grep tap | awk '{print $1}')
ip netns exec $NS ip link set mtu 1496 dev $DEV
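For reference, the DEV extraction above just takes the first whitespace-separated field of the ifconfig line that mentions the tap device. Against a sample line (the device name here is made up for illustration) it behaves as follows:

```shell
# Hypothetical ifconfig output line for a DHCP-namespace tap device:
sample='tapd1a2b3c4-5e Link encap:Ethernet  HWaddr fa:16:3e:11:22:33'
# Same grep/awk pipeline as in the workaround:
DEV=$(printf '%s\n' "$sample" | grep tap | awk '{print $1}')
echo "$DEV"   # prints tapd1a2b3c4-5e
```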
Detailed description
Due to a wrong MTU setting, the network is unstable if a VM on a DPDK private network is booted from an image other than Cirros: the advertised MTU does not take the 4-byte VLAN tag into account.
# ps aux | grep dnsmasq | grep -o -P 'option:mtu[^ ]+'
option:mtu,1500
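The 1496 used in the workaround follows from subtracting the 802.1Q VLAN tag from the 1500 bytes that dnsmasq advertises; a quick sketch of the arithmetic:

```shell
PHYS_MTU=1500   # MTU advertised by dnsmasq (option:mtu,1500)
VLAN_TAG=4      # 802.1Q tag bytes not accounted for
GUEST_MTU=$((PHYS_MTU - VLAN_TAG))
echo "$GUEST_MTU"   # prints 1496
```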
On the VM:
ip link set mtu 1496 dev eth0
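The ip link change on the VM does not survive a reboot. Assuming an Ubuntu guest managed by ifupdown (the file path and interface name below are illustrative), the MTU can be pinned in an interfaces stanza:

```
# /etc/network/interfaces.d/eth0.cfg (illustrative path)
auto eth0
iface eth0 inet dhcp
    mtu 1496
```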