As an example, let’s assume you want to run your web servers in OpenStack to leverage LBaaS and autoscaling, but for some reason these web servers also need access to virtual machines or physical servers running on a physical network. Here are instructions to set that up.
In this example, the public network (192.168.5.0/24) is the network from which my users access my web application. A floating IP is mapped to a load-balancing pool VIP to distribute the load between my web servers. The private network (10.10.10.0/24) is a VXLAN virtual network inside OpenStack Neutron, while the prov1 network is a separate physical network where my database server(s) reside. Another use case would be reaching another data center or a remote office from my internal OpenStack virtual network.
Step 1 – Create a new bridge on your controller node(s) and add a physical interface to this bridge.
I created a new bridge called br-prov1:
# cat /etc/sysconfig/network-scripts/ifcfg-br-prov1
DEVICE=br-prov1
DEVICETYPE=ovs
TYPE=OVSBridge
OVSBOOTPROTO=none
ONBOOT=yes
IPADDR="192.168.3.100"
PREFIX="24"
And attached my physical interface (eth1) to this new bridge:
# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-prov1
ONBOOT=yes
BOOTPROTO=none
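Once the two ifcfg files are in place, the bridge and port can be brought up and verified. A quick sketch, assuming the device names above and a RHEL/CentOS-style network-scripts setup:

```shell
# Bring up the bridge and attach the physical interface
ifup br-prov1
ifup eth1

# Confirm that eth1 now shows up as a port on br-prov1
ovs-vsctl show
```

You should see eth1 listed under the br-prov1 bridge in the `ovs-vsctl show` output before moving on to the Neutron configuration.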
Step 2 – Create a new physnet device in OVS
Update the “bridge_mappings” setting in your OVS agent configuration file with the new physnet:
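A sketch of the mapping follows. The file path varies by distribution and installer (commonly /etc/neutron/plugins/ml2/openvswitch_agent.ini, or /etc/neutron/plugin.ini on some setups), and the physnet name physnet-prov1 is my own choice here:

```
[ovs]
bridge_mappings = physnet-prov1:br-prov1
```

Restart the neutron-openvswitch-agent service after changing this setting so the agent picks up the new mapping.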
Step 3 – Finally, create a new flat network (the flat driver must be enabled in /etc/neutron/plugin.ini)
In this example, I am creating a shared flat network under my services project. This way, all projects will have access to this physical network. That said, you could also create this network under a specific project.
Once this is completed, don’t forget to set up your subnet (providing all subnet configuration details) and attach it to your router as per the diagram at the top of this article.
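The steps above can be sketched with the neutron CLI of that era (newer releases use `openstack network create` instead). The network name, physnet label, allocation pool, and router name below are assumptions for this example; the gateway 192.168.3.254 matches the virtual router address used later in this article:

```shell
# Shared flat network on the new physnet, visible to all projects
neutron net-create prov1 --shared \
    --provider:network_type flat \
    --provider:physical_network physnet-prov1

# Subnet matching the physical segment; the allocation pool is an
# example range carved out so it does not clash with existing hosts
neutron subnet-create prov1 192.168.3.0/24 --name prov1-subnet \
    --gateway 192.168.3.254 \
    --allocation-pool start=192.168.3.200,end=192.168.3.220

# Attach the subnet to the virtual router; the router interface
# takes the subnet's gateway IP (192.168.3.254)
neutron router-interface-add my-router prov1-subnet
```

Creating the network with `--shared` makes it available to every project; drop the flag and add `--tenant-id` to scope it to a single project instead.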
Step 4 – Routing back
Don’t forget to add a route back from your backend provider network. In my example, all traffic is routed by my virtual router through 192.168.3.254, so my backend servers need the following return route: 10.10.10.0/24 => 192.168.3.254.
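On a Linux backend server, the return route might look like this (this sketch adds it at runtime only; persist it through your distribution's network configuration):

```shell
# Route traffic destined for the OpenStack private network back
# through the virtual router's address on the prov1 segment
ip route add 10.10.10.0/24 via 192.168.3.254
```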
* Please note that following these instructions will not allow you to boot OpenStack instances on this provider network; additional steps are required to do so. All compute nodes would need access to the br-prov1 bridge, metadata server access would be required from this network for instances to receive their cloud-init data, and IP management would no longer be handled by the OpenStack DHCP service. I’ll try to write another article with more details about this when possible.