Adding physical network connectivity (provider network)

As an example, let’s assume that you want to run your web servers in OpenStack to leverage LBaaS and autoscaling, but that for some reason these web servers need access to virtual machines or physical servers running on a physical network.  Here are the instructions to do that.

Provider network

In this example, the public network (192.168.5.0/24) is the network from which my users will access my web application.  A floating IP will be mapped to a load-balancing pool VIP to distribute the load between my web servers.  The private network (10.10.10.0/24) is a VXLAN virtual network inside OpenStack Neutron.  The prov1 network, however, is another physical network where my database server(s) reside.  Another use case would be getting access to another data center or a remote office from my internal OpenStack virtual network.

Step 1 – Create a new bridge on your controller node(s) and add a physical interface to this bridge.

I created a new bridge called br-prov1:

# cat /etc/sysconfig/network-scripts/ifcfg-br-prov1 
ONBOOT=yes
IPADDR="192.168.3.100"
PREFIX="24"
DEVICE=br-prov1
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge

And attached my physical interface (eth1) to this new bridge:

# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-prov1
ONBOOT=yes
BOOTPROTO=none
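
To bring the new bridge and its port up, both interfaces can be activated with ifup (or by restarting the network service).  This is just a sketch and assumes the legacy network-scripts setup shown above:

 # ifup br-prov1
 # ifup eth1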

Step 2 – Map a new physnet to the bridge in the OVS agent configuration

Update the “bridge_mappings” setting in the following file to include your new physnet:
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

bridge_mappings=physnet1:br-ex,physnet2:br-prov1
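
After changing the mapping, the Open vSwitch agent typically needs to be restarted to pick up the new bridge.  The exact service name depends on your distribution and release; on a systemd-based controller it would look something like this:

 # systemctl restart neutron-openvswitch-agent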

Step 3 – Finally, create a new flat network (flat driver must be enabled in /etc/neutron/plugin.ini)

In this example, I am creating a shared flat network under my services project.  This way, all projects will have access to this physical network.  That said, you could also create this network under a specific project.  The command would look something like this (the network name and project ID are placeholders; physnet2 matches the bridge_mappings entry above):

 # neutron net-create prov1 --tenant-id <services-project-id> --shared --provider:network_type flat --provider:physical_network physnet2

Once this is completed, don’t forget to set up your subnet (providing all subnet configuration details) and attach it to your router as per the diagram at the top of this article.
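
Here is a rough sketch of those two steps.  The subnet range follows the prov1 example above, but the gateway address, allocation pool, and router name are assumptions to adjust for your environment:

 # neutron subnet-create prov1 192.168.3.0/24 --name prov1-subnet --gateway 192.168.3.254 --disable-dhcp --allocation-pool start=192.168.3.200,end=192.168.3.250
 # neutron router-interface-add <router-id> prov1-subnet

With the gateway set to 192.168.3.254, the port created by router-interface-add will take that address, which is what the route in Step 4 below points to.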

Step 4 – Routing back

Don’t forget to add a route back from your backend provider network.  In my example, all traffic is routed by my virtual router through 192.168.3.254.   So my backend servers needed the following route back:  10.10.10.0/24 => 192.168.3.254.
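
On a Linux backend server, that route could be added like this (make it persistent through your distribution’s network configuration if needed):

 # ip route add 10.10.10.0/24 via 192.168.3.254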

 

* Please note that following these instructions will not allow you to boot OpenStack instances on this provider network.  Additional steps are required to do so: all compute nodes would require access to this br-prov1 bridge, metadata server access would be required from this network to receive cloud-init data, and IP addressing would no longer be managed by the OpenStack DHCP service.  I’ll try to write another article when possible to provide more details about this.

 

How to add multiple external networks for floating IPs in OpenStack

With recent packages, it’s possible to create multiple provider external networks on the same l3-agent node. Please follow the steps below.
Assuming communication with the first external network goes through eth0 and with the second external network through eth1, you should have two external bridges configured and the interfaces added to them up front.

 # ovs-vsctl add-br br-ex
 # ovs-vsctl add-port br-ex eth0
 # ovs-vsctl add-br br-ex1
 # ovs-vsctl add-port br-ex1 eth1

If there are more than two external networks, create additional bridges, then add the port associated with each external network to its bridge. It’s also possible to use VLAN-tagged interfaces such as eth0.100, eth0.101, eth0.102, etc. to connect multiple external networks, as sketched below.
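A minimal sketch of the VLAN approach; the interface name, VLAN ID, and bridge name here are only illustrative:

 # ip link add link eth0 name eth0.100 type vlan id 100
 # ip link set eth0.100 up
 # ovs-vsctl add-br br-ex2
 # ovs-vsctl add-port br-ex2 eth0.100
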
Then configure two physical networks in /etc/neutron/plugin.ini and map the bridges accordingly.

 network_vlan_ranges = physnet1,physnet2
 bridge_mappings = physnet1:br-ex,physnet2:br-ex1

Set external_network_bridge to an empty value in /etc/neutron/l3_agent.ini:

 # Name of bridge used for external network traffic. This should be set to
 # an empty value for the Linux bridge
 external_network_bridge =

This is required in order to use provider external networks, as opposed to a bridge-based external network where we would set external_network_bridge = br-ex.
Create multiple external networks as flat networks and associate each of them with the correct configured physical_network.

 # neutron net-create public01 --provider:network_type flat --provider:physical_network physnet1 --router:external=True
 # neutron net-create public02 --provider:network_type flat --provider:physical_network physnet2 --router:external=True

Create subnets appropriately for each external network, as sketched below.
You will be able to set any of the external networks above as the gateway for a virtual router and assign floating IPs from that network to instances on the private networks connected to that router.
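For example (the CIDR, allocation pool, and router name are assumptions, not values from the original setup):

 # neutron subnet-create public01 203.0.113.0/24 --name public01-subnet --disable-dhcp --allocation-pool start=203.0.113.10,end=203.0.113.200
 # neutron router-gateway-set router1 public01
 # neutron floatingip-create public01
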
Note that if ML2 is used, the above parameters are still valid in plugin.ini. Additionally, you need to configure the parameters below as well when ML2 is used.

 [ml2]
 type_drivers = flat  # add flat to the existing list of type drivers

 [ml2_type_flat]
 flat_networks = physnet1,physnet2

While using provider external networks, traffic to/from the external network flows through br-int. br-int and br-ex will be connected using the veth pair int-br-ex and phy-br-ex; br-int and br-ex1 will be connected using the veth pair int-br-ex1 and phy-br-ex1. These are created automatically by neutron-openvswitch-agent based on the bridge_mappings configured earlier.
The diagram below shows the packet flow for multiple external networks via br-int on the network node. Note that I have deliberately excluded the packet flow to private tenant networks using GRE, VXLAN, or VLAN from the diagram to keep it simple.

[Diagram: packet flow for multiple external networks through br-int on the network node]
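
To verify the connections created from bridge_mappings, something like the following can be run on the network node (on newer releases the agent may use OVS patch ports instead of veth pairs, so the exact port names and types will vary):

 # ovs-vsctl list-ports br-int
 # ovs-vsctl list-ports br-ex
 # ovs-vsctl list-ports br-ex1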

Executing troubleshooting commands inside a namespace

Here is how you can execute some troubleshooting commands as if you were running them on a virtual router inside Neutron.

[root@serverX ~(keystone_admin)]# neutron router-list
[root@serverX ~(keystone_admin)]# ip netns
[root@serverX ~(keystone_admin)]# ip netns exec qrouter-10bf634d-3228-4041-8f3a-4d5e0e603c07 ping 8.8.8.8
[root@serverX ~(keystone_admin)]# ip netns exec qrouter-10bf634d-3228-4041-8f3a-4d5e0e603c07 netstat -rn

* Kernel network namespaces are used to isolate virtual networks from each other.  This is great from a security perspective but sometimes makes troubleshooting harder.