Hi Gary,
That was probably a key thing. I'm now in a slightly different situation, but I think I need to sort out the configs to get it right.
Once I added the router ID to the ini and restarted the l3_agent, it created the qg- and qr- interfaces, and I can now ping the floating IP from the controller node.
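For reference, the change boils down to something like this; the file path, service name and the quantum/neutron prefix depend on the release, and the UUID placeholder is just whatever router-list reports:

# /etc/quantum/l3_agent.ini (or /etc/neutron/l3_agent.ini on newer releases)
[DEFAULT]
interface_driver = quantum.agent.linux.interface.OVSInterfaceDriver
# the one router this agent should manage, from "quantum router-list"
router_id = <router-uuid>

# then restart the agent so it picks the change up
service quantum-l3-agent restart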
Unfortunately it also changed my routing table, adding a new default gateway (10.2.1.201), which knocked my controller off the public network.
Luckily I can still access it via the internal bridges from the compute node, and I have IPMI as a worst case.
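To see exactly what it changed I'm just looking at the kernel table with:

route -n
# or
ip route show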
qg-acda11d9-dd Link encap:Ethernet
inet addr:10.2.1.202 Bcast:10.2.1.207 Mask:255.255.255.248
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
qr-792fef06-66 Link encap:Ethernet
inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
I have IPs on br-ex and br-int; should I remove those?
br-int Link encap:Ethernet
inet addr:10.0.0.1 Bcast:10.0.0.255 Mask:255.255.255.0
br-ex Link encap:Ethernet
inet addr:10.2.1.201 Bcast:10.250.1.207 Mask:255.255.255.248
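If dropping those is the right call, I assume it's just a matter of (prefix lengths taken from the masks above):

# 10.0.0.1 is already on qr-792fef06-66, and 10.2.1.201 is the external gateway address
ip addr del 10.0.0.1/24 dev br-int
ip addr del 10.2.1.201/29 dev br-ex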
I also have a public interface, eth0, which is how I normally connect to the server remotely.
eth0 Link encap:Ethernet
inet addr:10.2.1.175 Bcast:10.250.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
Ideally I would like:
eth0 to be the default gateway interface for the system.
br-ex with eth3 port used just for 10.2.1.200/29 (instance VM NAT traffic)
br-int with eth1 port used for 10.0.0.0/24 (instance VM traffic)
br-img with eth2 port used for 10.0.1.0/24 (OpenStack management traffic)
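In case it clarifies things, this is roughly how I picture wiring that up with OVS (creating br-img if it doesn't already exist; eth0 stays out of any bridge):

ovs-vsctl add-port br-ex eth3      # 10.2.1.200/29  instance NAT / floating IPs
ovs-vsctl add-port br-int eth1     # 10.0.0.0/24    instance VM traffic
ovs-vsctl add-br br-img            # create the management bridge if it isn't there yet
ovs-vsctl add-port br-img eth2     # 10.0.1.0/24    OpenStack management traffic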
I can delete the added gateway and get traffic flowing through the server again:
route del -net 0.0.0.0 gw 10.2.1.201
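and, if the original default got clobbered too, putting it back on eth0 should just be (10.2.1.2 being the upstream gateway):

route add default gw 10.2.1.2 eth0
# or, iproute2 style
ip route replace default via 10.2.1.2 dev eth0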
I can ping the floating IP and the gateway (10.2.1.201) from the VM, but I still can't get past that; I can't ping the next-hop gateway (10.2.1.2).
The system default gateway is 10.2.1.2.
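For the VM side of it, the things I plan to check next, in case one of them is the obvious culprit:

# is forwarding enabled on the network node?
sysctl net.ipv4.ip_forward

# are the SNAT/DNAT rules for the floating range actually installed?
iptables -t nat -S | grep 10.2.1

# and whether 10.2.1.2 has a way back to 10.2.1.200/29
# (a route for the /29, or the range visible on the same segment as eth3/br-ex)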