How to install Red Hat Openstack 9 with Packstack

Introduction

Here is an updated version of how to install Red Hat Openstack 9 (Mitaka) with Packstack and perform some common basic configuration changes. The same procedure could be used with RDO if you don’t have subscriptions for Red Hat Openstack. Obviously, the main differences would be that you would use CentOS instead of RHEL, and configure different repos.

Packstack is a simple tool to quickly install Red Hat Openstack in a non-HA configuration.   OSP Director is the appropriate tool for a full HA environment with complete lifecycle capabilities.

Prerequisites before starting the install:

  • Basic RHEL7 installation
  • Valid Openstack subscription attached to this server
  • The following repos:
    • rhel-7-server-rpms
    • rhel-7-server-extras-rpms
    • rhel-7-server-rh-common-rpms
    • rhel-7-server-optional-rpms
    • rhel-7-server-openstack-9-rpms
    • rhel-7-server-openstack-9-tools-rpms
    • rhel-7-server-openstack-9-optools-rpms
# ATTACHING SUBS AND REPOS
subscription-manager register --username='your_username' --password='your_password'
subscription-manager attach --pool=your_pool_id
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-9-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-7-server-openstack-9-optools-rpms --enable=rhel-7-server-openstack-9-tools-rpms

# FOR RDO ON CENTOS, YOU CAN FIND HOW TO CONFIGURE REPOS HERE: https://www.rdoproject.org/install/quickstart/

  • Disable SELinux (or set it to permissive)
  • Stop and disable firewalld
  • Stop and disable NetworkManager
  • Make sure you have static IPs on your NICs, not DHCP-based.
  • Make sure hostname and DNS are set up appropriately.  Your hostname should resolve.  Put it in /etc/hosts if required.
  • Update to the latest packages (yum -y update) and reboot.  (Example commands below.)
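Here is a minimal sketch of those prep steps (assuming you are fine with SELinux permissive; adapt to your environment):

# SET SELINUX TO PERMISSIVE NOW AND ACROSS REBOOTS
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# STOP AND DISABLE FIREWALLD AND NETWORKMANAGER
systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager

# UPDATE AND REBOOT
yum -y update
reboot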

Installing packstack

yum install -y openstack-packstack

Once packstack is installed, you can simply run “packstack --allinone” as described in most instructions you will find online.  This is the simplest way to install with packstack, using the default configuration for everything.  That said, in my example here, I want to make some changes to the default config.   So here is what I usually do:

packstack --gen-answer-file=/root/answers.txt

This will create an answer file with all configuration settings.   I can now change all the settings I want in this file, and then launch packstack again using this file as configuration input.

Here are all the changes I will do in my answer file:

# turning "ON" the heat module.  Heat is great! :-)
CONFIG_HEAT_INSTALL=y

# also turning "ON" LBaaS.
CONFIG_LBAAS_INSTALL=y

# By default, Neutron only allows vxlan networks.
# But I want "flat" and "local" drivers to be there as well.
# (will be required later to create new networks)
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat,local

# These lines will automatically create an OVS bridge for me.
# This is required by Neutron to get external access.
# Replace "eth0" by the name of your interface
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=ex-net:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0

# By default, packstack installs some demo stuff.
# Turning this OFF.
CONFIG_PROVISION_DEMO=n

The “flat” driver and the OVS bridge configuration are usually where people struggle at first.  Regardless of how we configure our external networks later (and I’ll talk about tenant vs provider networks later), we will need a “flat” network to get outside of Openstack.  VXLAN packets would have nowhere to go on my network.  Also, an OVS bridge must exist for Neutron to do its job.   The “BRIDGE_MAPPINGS” and “BRIDGE_IFACES” configuration lines take care of this automatically for you.

Installing Openstack

Ok, now that packstack is installed and we have an answer file, let’s start the installation:

packstack --answer-file=/root/answers.txt

This could take some time to complete, depending on the speed of your network and your access to the RPMs.
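Once the run completes, you can quickly confirm that the br-ex bridge exists and that your interface was attached to it (assuming the eth0/br-ex mapping from the answer file above):

ovs-vsctl show                # lists all OVS bridges and their ports
ovs-vsctl list-ports br-ex    # should print eth0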

Finalizing the configuration of Openstack

You can first check that all openstack services are running:

openstack-service status

You should also see a KeystoneRC file in your root folder:  /root/keystonerc_admin

Source this file to get access to your Openstack cloud:

source /root/keystonerc_admin
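For reference, the generated file looks something like this (your password and IP will differ):

export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=<generated_password>
export OS_AUTH_URL=http://<your_ip>:5000/v2.0
export PS1='[\u@\h \W(keystone_admin)]\$ '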

At this point, I usually create an account for myself with my own password (this is an optional step):

openstack user create --project admin --password <password> <username>
openstack role add --user <username> --project admin admin

This way, you can log into Horizon (the web portal) using your own credentials instead of the automatically generated admin password.

You could also create a new KeystoneRC file with your own credentials for when you are on the command line.  Again, an optional step!
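A minimal sketch of such a file, reusing the user created above (all values are placeholders):

cat > /root/keystonerc_<username> << 'EOF'
export OS_USERNAME=<username>
export OS_TENANT_NAME=admin
export OS_PASSWORD=<password>
export OS_AUTH_URL=http://<your_ip>:5000/v2.0
export PS1='[\u@\h \W(keystone_<username>)]\$ '
EOF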

At least one image in Glance will be required before we can launch anything.  Let’s just upload a small Cirros image for now (ideal for testing as it’s tiny and fast!).

curl -o /root/cirros.img http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name='cirros' --container-format=bare --disk-format=qcow2 < /root/cirros.img
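You can confirm the image was uploaded and is active:

glance image-list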

Finally, before you can launch instances (VMs), you must finalize your tenant network configuration:  create at least one network for VMs to boot on.  There are two very common ways to deal with networks in Openstack:  tenant networks vs provider networks.

When using tenant networks, your instances boot up on a virtual network (usually based on VXLAN) and need to go through a virtual Openstack router to reach the outside world.   To allow access from the outside to the internal virtual network, you create floating IPs on your virtual router.  This is the most classic way to deploy Openstack.

When using provider networks, you can allow your instances to boot up directly on the external (or public) network.  No need for virtual networks, no need for floating IPs.    This is a much easier way to deal with networks and can provide better performance (no VXLAN encapsulation), but it obviously doesn’t scale as well from an IP allocation perspective, as each instance will require at least one public IP.

Configuring tenant networks

I first need to create an external (public) network where floating IPs will be assigned:

neutron net-create public --router:external True --provider:physical_network ex-net --provider:network_type flat --tenant-id services

neutron subnet-create public 172.16.185.0/24 --name public --allocation-pool start=172.16.185.10,end=172.16.185.100 --disable-dhcp --gateway=172.16.185.2

A few things to note here:  

  • This will only work if I have allowed “flat” network drivers in my answers.txt file.  If I haven’t done that, I will need to update neutron manually to allow the “flat” driver and restart neutron:  vi /etc/neutron/plugin.ini;  openstack-service restart neutron
  • “ex-net” is the name of the physical network (physnet) I used in my answer file.  It is important that you keep the same name.  Openstack doesn’t attach to a physical interface name like “eth0”; instead we use an alias called a “physnet”.   This way, multiple servers in our stack could have different physical interface names but always use the same alias.
  • I have created my public network under my “services” tenant, but all tenants will see this network.
  • Obviously, change the public subnet configuration to what makes sense for you.   Floating IPs will be allocated from the allocation-pool range defined by this subnet.  (A quick check follows below.)
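Before moving on, you can double-check the network and subnet you just created:

neutron net-show public
neutron subnet-show public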

Now, in a tenant network configuration, I need to create at least one router and a private (VXLAN based) network for my instances to boot on:

neutron router-create router
neutron router-gateway-set router public
neutron net-create private
neutron subnet-create private 10.10.10.0/24 --name private --gateway 10.10.10.1 --enable-dhcp --dns-nameserver 8.8.8.8
neutron router-interface-add router private

We should be all good now.  Let’s boot our first instance on our private network:

nova boot --flavor m1.tiny --image cirros cirros01

Log into Horizon; you should see your cirros instance and have access to its console.   Update your security groups to allow SSH and assign a floating IP if you’d like to connect to it remotely (see the sketch below).
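Here is a sketch of those two steps from the CLI (assuming the default security group and the public network created above):

# allow SSH in the default security group
neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 default

# create a floating IP on the public network, then attach it to the instance
neutron floatingip-create public
nova floating-ip-associate cirros01 <floating_ip_from_previous_command>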

Configuring provider networks

Let’s now see how we can avoid using floating IPs, routers and virtual networks.    In many cases, it makes sense to just boot Openstack instances on an existing physical network (a provider network).

To do that, you first need to roll back all the configuration you did to create tenant networks in the previous step.   You could also have a mix of tenant and provider networks if you have other network interfaces configured on your Openstack servers.  That said, the following instructions assume that you rolled back all previous configuration and are using the same “eth0 => ex-net => br-ex” configuration.

When an instance boots up, it reaches out to the metadata service to get configuration details.   As an example, if you launch this curl request from an Openstack instance, you will get your hostname back:

# curl http://169.254.169.254/2009-04-04/meta-data/hostname
cirros01

What you need to understand before moving to provider networks is how your instances get network access to the 169.254.169.254 IP address (the metadata service).  By default, the virtual router (your instance’s default gateway) has a route to this metadata service IP.   But when using provider networks, you don’t have this virtual router anymore.   For this reason, you need to change your DHCP agent configuration so that a route is injected into your instances during the DHCP process; your dnsmasq DHCP server will then also route your instances’ metadata HTTP requests to your metadata server.

# Change the following settings in dhcp_agent.ini
vi /etc/neutron/dhcp_agent.ini

    enable_isolated_metadata=true
    enable_metadata_network=true

# Restart Neutron
openstack-service restart neutron

Now, all that is missing is to create a new public network (provider network) under your tenant, allowing it to boot instances directly on this network.

# Get your tenant (project) ID 
openstack project list

# Create a new network under this tenant ID
neutron net-create public --provider:physical_network ex-net --provider:network_type flat --tenant-id e4455fcc1d82475b8a3a13f656ac701f

# Create a subnet for this network
neutron subnet-create public 172.16.185.0/24 --name public --allocation-pool start=172.16.185.10,end=172.16.185.100 --gateway=172.16.185.2

Now, boot an instance:

nova boot --flavor m1.tiny --image cirros cirros01
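You can confirm the instance got an address directly on the provider subnet:

nova list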

Installing Red Hat Openstack 5 on RHEL7

This installation procedure is a simple way (using packstack) to deploy a multi-node environment in a few minutes.

Using this procedure, all services will be installed on your controller node except for compute services, which can be offloaded to other servers.   Here is a simple diagram showing my setup:

[Diagram: one controller node running all services, plus separate compute nodes]

Obviously, this is not following best-practices.  But it’s an easy way to get Openstack up in a few minutes and test functionalities including live-migration between hosts.

We will configure Neutron to use VXLAN to encapsulate traffic between your hosts and provide full SDN capabilities.

Install RHEL7 basic on all your nodes.  

All nodes should have two interfaces (public, private).  That said, your public interface will only be used on your controller node.  You can disable the public interface on your compute node later if you’d like.

Register / Update / Disable Network Manager  (all your nodes)

subscription-manager register
subscription-manager subscribe --auto
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms

yum -y update

systemctl disable NetworkManager

Verify that your network interface files in /etc/sysconfig/network-scripts have an entry DEVICE=<interface_name>.  When disabling NetworkManager, your interface will not come back up if this entry is missing.

reboot

Disable SELINUX on all your hosts

setenforce 0
vi /etc/sysconfig/selinux
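Set the following in that file so the change persists across reboots (permissive shown here; disabled also works):

    SELINUX=permissive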

Install NFS server on your controller node for Cinder and Nova instances

yum groupinstall -y file-server
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
mkdir -p /exports/cinder
chmod 777 /exports/cinder
mkdir -p /exports/nova
chmod 777 /exports/nova

vi /etc/exports
    /exports/cinder  *(rw,no_root_squash)
    /exports/nova  *(rw,no_root_squash)

exportfs -avr
systemctl restart nfs-server

* Obviously, 777 permissions are not ideal.  But once packstack installation is completed, you can come back and change ownership of these folders to the appropriate cinder & nova users.
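For example, once packstack has finished (assuming the standard cinder and nova service users created by the packages):

chown -R cinder:cinder /exports/cinder
chown -R nova:nova /exports/nova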

Install Packstack

yum install -y openstack-packstack

Generate SSH keys

ssh-keygen

Generate a packstack answer file

packstack --gen-answer-file=/root/answers.txt

Edit the answer file to provide all configuration details

vi /root/answers.txt

Use my answer file as an example to validate all your settings.
packstack answer file

You could also just use my file but by generating a new file, you are making sure you are compatible with the latest packstack version.

vi /root/answers.txt <= Update all IP addresses with the appropriate IPs for you + all other details unique to your environment (NFS share, etc…). Most of the file should be good as-is.

Run packstack

packstack --answer-file=/root/answers.txt

Configure your External Bridge

An external bridge named br-ex must be configured on your controller node to let your host reach your external (public) network.  You can get this done by creating a new file named /etc/sysconfig/network-scripts/ifcfg-br-ex

You also need to modify your existing public interface file in /etc/sysconfig/network-scripts.

The idea is to move the IP address to your bridge and connect your physical interface as an Openvswitch port on your bridge instead.

Here are some configuration file examples.  Just copy this but obviously, replace configuration values with your own network settings:

BRIDGE:      ifcfg-br-ex
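A minimal example, reusing the IP settings moved off your public interface (all values are placeholders):

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=<ip address from em1>
NETMASK=<netmask from em1>
GATEWAY=<gateway from em1>
ONBOOT=yes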

PUBLIC INTERFACE:   ifcfg-em1
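And the matching physical interface file, now attached to the bridge as an OVS port:

DEVICE=em1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes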

Live migration

At this point, Openstack should be up and running, but all your instances (VMs) will be running locally on each compute node under /var/lib/nova/instances.

All you have to do is mount this folder from the shared NFS server to enable live migration.

On your controller node:

chown nova:nova /exports/nova
rsync -av root@<compute_node>:/var/lib/nova /exports/nova/

On your compute node:

mv /var/lib/nova/instances /var/lib/nova/instances.backup
mount -t nfs <controller>:/exports/nova/instances /var/lib/nova/instances

* Obviously, you should add the appropriate line of configuration in your fstab to get this done automatically at boot time
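For example, a line along these lines on each compute node (replace <controller> with your NFS server):

<controller>:/exports/nova/instances  /var/lib/nova/instances  nfs  defaults  0 0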

** Common issues:  Make sure iptables is allowing NFS;  make sure your hosts can resolve each other (or add them to all /etc/hosts files).

My RDO installation procedure

This is OLD.  You should look at this updated article:

http://www.marcoberube.com/archives/346

RDO is a community of people using and deploying OpenStack on Red Hat Enterprise Linux, Fedora and distributions derived from these (such as CentOS, Scientific Linux and others).

This procedure describes how to install RDO Icehouse release on CentOS 6.5.  By following this procedure, you will install all Openstack modules on one host but keep your options open if you eventually want to add an additional compute node.

Requirements:
  • Centos 6.5 basic server installation
  • 20GB+ of disk space:  Depends on what you want to do, but more is better as you will have to host instances and images as well.
  • 2 network interfaces:   eth0 for your public network; eth1 for your private (SDN) network.   Both interfaces should have static IP addresses.
Diagram of what we are doing
[Diagram: simple Openstack architecture]

* eth1 on a private network is only required if you are planning to grow this environment to more than one host.  If all you need is all-in-one, replace “gre” by “local” in the upcoming network configuration files.

Add RDO channel, install openstack-packstack and update all packages:
yum install -y http://rdo.fedorapeople.org/rdo-release.rpm
yum install -y openstack-packstack
yum -y update
reboot

* Packstack is a puppet-based Openstack installer.  An easy way to install Openstack.

** Reboot is required if your kernel has been updated

Generate an SSH key
ssh-keygen

Create a packstack answer file (configuration file for the automated installation)

packstack --gen-answer-file=/root/answers.txt

Update the following settings in your answer file

CONFIG_HEAT_INSTALL=y
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:1000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_KEYSTONE_ADMIN_PW=<your_password>
Start your packstack installation
packstack --answer-file=/root/answers.txt

* This will take a while.  I fixed many installation errors just by running that install command a second time.  Not sure why (network timeouts maybe..??) but there are so many puppet manifests, I guess it’s easy to have one fail on you, and running it one more time often fixes the problem.

 Neutron configuration

Ok, this is where everybody has issues.  The first thing to understand is that you need an external bridge (ifcfg-br-ex) on your host for Neutron to work properly.  It’s like adding a virtual switch between your host’s IP configuration and the physical interface (eth0).

This new file should have been created during the installation:

/etc/sysconfig/network-scripts/ifcfg-br-ex

We need to move the IP address from eth0 to this bridge and connect eth0 to that bridge instead.

Update this file so it looks like this:

DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=<ip address from eth0>
NETMASK=<netmask from eth0>
GATEWAY=<gateway from eth0>
ONBOOT=yes

Now, edit /etc/sysconfig/network-scripts/ifcfg-eth0 so it looks like this:

DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

* Be careful here, many problems are coming from typos in these files.

Finally, let’s edit /etc/neutron/plugin.ini and make some updates:

[ml2]
type_drivers = gre
tenant_network_types = gre

* My understanding is that this should have been updated by packstack since we used GRE in our answer file.  That said, at the time I tested this procedure, that didn’t work.  This is probably due to the fact that Icehouse is now using the ML2 plugin on top of Openvswitch instead of using Openvswitch directly.

 Restart your network
service network restart

* If you lost your network connectivity, something went wrong.  Go back to your console access to validate that ifcfg-eth0 and ifcfg-br-ex are configured properly.

You can use the following Openvswitch command to validate that you have a bridge br-ex and a port named “eth0” connected to that bridge.

ovs-vsctl show
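For a shorter check (assuming the eth0/br-ex setup above):

ovs-vsctl list-ports br-ex   # should print eth0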
Validating that Openstack is running fine

Once your network connectivity is working, use the following command to check that openstack is running properly:

source /root/keystonerc_admin
openstack-status

This should give you a very long output providing status information on all your modules.

Removing default Neutron configuration

Packstack configures a public network and router by default.  Unless you are very lucky and this default public network uses the exact same IP range as your real public network, let’s remove these settings and start fresh:

neutron router-gateway-clear router1
neutron subnet-delete public_subnet

Now, let’s configure a new router and public subnet.

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=192.168.122.10,end=192.168.122.20 --gateway=192.168.122.1 public 192.168.122.0/24
neutron router-gateway-set router1 public

* Obviously, change all IP address settings based on your public network settings.   Neutron will not use DHCP to allocate public floating IP addresses; they will be manually assigned from the allocation pool described in this command.

Get to Horizon web interface in your browser
http://<your_br-ex_ip>/

Login: admin
Password:  <password set in your answer file>

* if you haven’t set your keystone admin password in your answer file, you can find the password automatically generated for you in /root/keystonerc_admin

Setting up virtual network for the admin tenant

First, let’s look at what our network currently looks like.

>> Click on Project/Network/Network Topology

[Screenshot: network topology showing the public network and its subnet]

Ok, we have a public network and a subnet assigned to this network.   We don’t want to run instances directly on this network.  Instead, we will create a virtual private network and a virtual router to route our traffic.

>> Click on “Create Network”
Network Name = private

>> Click on “Subnet”
Subnet name = private_subnet
Network Address = 10.0.0.0/16 (or whatever you like)
Gateway IP = 10.0.0.1

>> Click on “Subnet Details”
Enable DHCP = True
DNS Name Servers = 8.8.8.8

>> Click on “Create”

[Screenshot: network topology with the new private network added]

Now, we need a router to route our traffic from private to public.

>> Click on “Create Router”
Router Name = router1
>> Click on “Create Router”

Aim your mouse over that new router and click on “View router details”

>> Click on “+ Add Interface”
Subnet = Private
IP Address = 10.0.0.1
>> Click on “Add Interface”

>> Click on “Routers” in the left menu to get back to your list of routers : /Project/Network/Routers

>> Click on “Set Gateway”
External Network = public
>> Click on “Set Gateway”

If you look at your “network topology” again, everything should now be connected:

[Screenshot: network topology with the private network, router and public network all connected]

Yeah!!!  Let’s boot our first instance… shall we????

>> Click on /Project/Compute/Images
An image named “cirros” should already be loaded.  Click on “Launch” to start this image.
Instance name = <your instance name>
>> Click on “Networking”
Drag “private” in the box of “selected networks”
>> Click “Launch” !!!!

I will not be getting into any details on how to use Openstack here.  This was only an installation procedure.   But if you look at your instances, click on your new instance and check the console, you should see your instance running.   Your instance should have grabbed an IP address from DHCP and be able to reach your external network.

Additional documentation

Here is a good document describing how Neutron works and help you troubleshoot common issues:
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html

 Coming soon:

– Adding a second compute node