Adding physical network connectivity (provider network)

As an example, let’s assume you want to run your web servers in OpenStack to leverage LBaaS and autoscaling, but for some reason these web servers need access to virtual machines or physical servers running on a physical network. Here are some instructions to do that.

Provider network

In this example, the public network (192.168.5.0/24) is the network from which my users will access my web application. A floating IP will be mapped to a load-balancing pool VIP to distribute the load between my web servers. The private network (10.10.10.0/24) is a VXLAN virtual network inside OpenStack Neutron. The prov1 network, however, is another physical network where my database server(s) will reside. Another use case could be getting access to another data center or a remote office from my internal OpenStack virtual network.

Step 1 – Create a new bridge on your controller node(s) and add a physical interface to this bridge.

I created a new bridge called br-prov1:

# cat /etc/sysconfig/network-scripts/ifcfg-br-prov1 
ONBOOT=yes
IPADDR="192.168.3.100"
PREFIX="24"
DEVICE=br-prov1
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge

And attached my physical interface (eth1) to this new bridge:

# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-prov1
ONBOOT=yes
BOOTPROTO=none

Step 2 – Create a new physnet device in OVS

Update the “bridge_mappings” setting in the following file with your new physnet:
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

bridge_mappings=physnet1:br-ex,physnet2:br-prov1
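The format of bridge_mappings is a comma-separated list of physical_network:bridge pairs. As a quick sketch of how each entry is read (plain bash, nothing OpenStack-specific):

```shell
# Each entry maps a Neutron physical network name to an OVS bridge.
mappings="physnet1:br-ex,physnet2:br-prov1"
IFS=',' read -ra pairs <<< "$mappings"
for pair in "${pairs[@]}"; do
  physnet=${pair%%:*}   # part before the colon: the physical network name
  bridge=${pair##*:}    # part after the colon: the OVS bridge
  echo "physical network '$physnet' is backed by bridge '$bridge'"
done
```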

Step 3 – Finally, create a new flat network (the flat driver must be enabled in /etc/neutron/plugin.ini)

In this example, I am creating a shared flat network under my services project.  This way, all projects will have access to this physical network.  That said, you could also create this network under a specific project.  The command looks like this (I named the network prov1 to match the diagram; adjust to your environment):

# neutron net-create prov1 --provider:network_type flat --provider:physical_network physnet2 --shared

Once this is completed, don’t forget to set up your subnet (providing all subnet configuration details) and attach it to your router as per the diagram at the top of this article.
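The subnet and router steps might look like the following; the subnet name, router name and allocation pool are assumptions for illustration, but the gateway matches the 192.168.3.254 router address used in Step 4:

```shell
# Hypothetical subnet on the provider network; DHCP is disabled because IP
# management stays on the physical network side:
neutron subnet-create prov1 192.168.3.0/24 --name prov1-subnet \
    --disable-dhcp --gateway 192.168.3.254 \
    --allocation-pool start=192.168.3.10,end=192.168.3.99

# Attaching the subnet gives the virtual router the 192.168.3.254 address
# (router1 is a hypothetical router name):
neutron router-interface-add router1 prov1-subnet
```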

Step 4 – Routing back

Don’t forget to add a route back from your backend provider network.  In my example, all traffic is routed by my virtual router through 192.168.3.254, so my backend servers needed the following route back: 10.10.10.0/24 => 192.168.3.254.
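On RHEL-style backend servers, that return route can be made persistent with a route-&lt;interface&gt; file; the interface name eth0 here is an assumption:

```
# /etc/sysconfig/network-scripts/route-eth0
10.10.10.0/24 via 192.168.3.254 dev eth0
```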


* Please note that following these instructions will not allow you to boot OpenStack instances on this provider network; additional steps are required to do so.  All compute nodes would need access to this br-prov1 bridge, metadata server access would be required from this network so instances can receive their cloud-init data, and IP management would no longer be handled by the OpenStack DHCP service.  I’ll try to write another article with more details about this when possible.


Network performance while using VXLAN or GRE


While using VXLAN, a header is added to every packet sent by your instances.  This information is required for VXLAN to do its magic, and the same applies to GRE.  This can cause fragmentation (and poor network performance) because your packets become bigger than the default MTU of 1500.
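As a back-of-the-envelope check, VXLAN over IPv4 adds about 50 bytes on the wire (outer IP 20 + UDP 8 + VXLAN header 8 + the encapsulated inner Ethernet header 14), so with a physical MTU of 1500:

```shell
physical_mtu=1500
vxlan_overhead=50   # approximate VXLAN-over-IPv4 encapsulation overhead
echo "largest safe inner MTU: $((physical_mtu - vxlan_overhead))"
```

That yields 1450; setting the instances to 1400 simply leaves a comfortable margin and covers GRE as well.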

To fix that problem, you can set your instances’ default interface MTU to 1400 instead of 1500.  This way, the VXLAN header will no longer cause fragmentation.  Here is how to do that:

This is achieved by having the DHCP server send out an MTU of 1400 to instances as a DHCP option.

* Create the file /etc/neutron/dnsmasq-neutron.conf with the following content:

dhcp-option-force=26,1400

* Edit /etc/neutron/dhcp_agent.ini and add the following:

dnsmasq_config_file=/etc/neutron/dnsmasq-neutron.conf

* Then kill all existing dnsmasq processes and restart the DHCP agent, or reboot the network node.

# service neutron-dhcp-agent restart

Spin up a new instance or reboot an existing one, then verify that its network interface MTU is now 1400.

How to add multiple external networks for floating IPs in OpenStack

Using recent packages, it’s possible to create multiple provider external networks on the same l3-agent node. Follow the steps below.
Assuming communication to the first external network is via eth0 and to the second via eth1, you should have two external bridges configured and the interfaces added to them upfront:

 # ovs-vsctl add-br br-ex
 # ovs-vsctl add-port br-ex eth0
 # ovs-vsctl add-br br-ex1
 # ovs-vsctl add-port br-ex1 eth1

If there are more than two external networks, create additional bridges, then add the port associated with each external network to its bridge. It’s also possible to use VLAN-tagged interfaces, like eth0.100, eth0.101, eth0.102, etc., to connect multiple external networks.
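For the VLAN-tagged variant, each tagged subinterface gets its own ifcfg file before being added to its bridge; VLAN ID 100 here is only an example:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0.100
DEVICE=eth0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
```

Then add it to the bridge with ovs-vsctl add-port br-ex1 eth0.100, as above.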
Then configure two physical networks in /etc/neutron/plugin.ini and map the bridges accordingly:

 network_vlan_ranges = physnet1,physnet2
 bridge_mappings = physnet1:br-ex,physnet2:br-ex1
Set external_network_bridge to an empty value in /etc/neutron/l3-agent.ini:

 # Name of bridge used for external network traffic. This should be set to
 # empty value for the linux bridge
 external_network_bridge =

This is required in order to use provider external networks, as opposed to bridge-based external networks, where we would set external_network_bridge = br-ex.
Create each external network as a flat network and associate it with the correct configured physical_network:

 # neutron net-create public01 --provider:network_type flat --provider:physical_network physnet1 --router:external=True
 # neutron net-create public02 --provider:network_type flat --provider:physical_network physnet2 --router:external=True

Create subnets appropriately for each external network.
You will then be able to set any of these external networks as the gateway for a virtual router and assign floating IPs from that network to instances on the private networks connected to that router.
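For example, the two external subnets and the gateway assignment might look like this; the CIDRs and allocation pools are purely illustrative (documentation ranges), and router1 is a hypothetical router name:

```shell
neutron subnet-create public01 203.0.113.0/24 --name public01-subnet \
    --disable-dhcp --allocation-pool start=203.0.113.10,end=203.0.113.200
neutron subnet-create public02 198.51.100.0/24 --name public02-subnet \
    --disable-dhcp --allocation-pool start=198.51.100.10,end=198.51.100.200

# Use one of them as a router gateway; floating IPs then come from that network:
neutron router-gateway-set router1 public01
```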
Note that if ML2 is used, the above parameters are still valid for plugin.ini. Additionally, you also need to configure the parameters below:

 type_drivers = flat  #add flat to the existing list.
 flat_networks = physnet1,physnet2
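Assembled into /etc/neutron/plugin.ini, those ML2 settings live in their own sections; the vxlan entry below stands in for whatever type drivers you already have configured:

```
[ml2]
type_drivers = flat,vxlan

[ml2_type_flat]
flat_networks = physnet1,physnet2
```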

While using provider external networks, traffic to/from the external network flows through br-int. br-int and br-ex are connected using the veth pair int-br-ex and phy-br-ex; br-int and br-ex1 are connected using the veth pair int-br-ex1 and phy-br-ex1. These are created automatically by neutron-openvswitch-agent based on the bridge_mappings configured earlier.
The diagram below shows the packet flow for multiple external networks via br-int on the network node. Note that I have deliberately excluded the packet flow to private tenant networks using GRE, VXLAN or VLAN to keep the diagram simple.

[Diagram: packet flow for multiple external networks via br-int on the network node]

Modifying an instance network interface configuration

Here is how you can migrate an instance to a new network or add a secondary interface to an existing instance.

Find the instance name that you want to modify:

nova list

List all interfaces attached to an instance:

nova interface-list <server>

Detach an interface from your instance:

nova interface-detach <server> <port_id> 

Find the ID of the network where you would like to attach your new interface:

nova net-list

Attach a new network interface to your network ID:

nova interface-attach --net-id <net_id> <server>

Voila!