How to install Red Hat Openstack 9 with Packstack

Introduction

Here is an updated version of how to install Red Hat Openstack 9 (Mitaka) with Packstack and perform some common basic configuration changes. The same procedure could be used with RDO if you don’t have subscriptions for Red Hat Openstack. Obviously, the main differences would be that you would use CentOS instead of RHEL, and configure different repos.

Packstack is a simple tool to quickly install Red Hat Openstack in a non-HA configuration, whereas OSP Director is the appropriate tool for a full HA environment with complete lifecycle capabilities.

Pre-requisites before starting the install:

  • Basic RHEL7 installation
  • Valid Openstack subscription attached to this server
  • Following repos:
    • rhel-7-server-rpms
    • rhel-7-server-extras-rpms
    • rhel-7-server-rh-common-rpms
    • rhel-7-server-optional-rpms
    • rhel-7-server-openstack-9-tools-rpms
    • rhel-7-server-openstack-9-rpms
    • rhel-7-server-openstack-9-optools-rpms
# ATTACHING SUBS AND REPOS
subscription-manager register --username='your_username' --password='your_password'
subscription-manager attach --pool=your_pool_id
subscription-manager repos --disable="*"
subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-9-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-7-server-openstack-9-optools-rpms --enable=rhel-7-server-openstack-9-tools-rpms

# FOR RDO ON CENTOS, YOU CAN FIND HOW TO CONFIGURE REPOS HERE: https://www.rdoproject.org/install/quickstart/

  • Disable SELinux (set it to permissive or disabled)
  • Stop and disable firewalld
  • Stop and disable NetworkManager
  • Make sure you have static IPs on your NICs, not DHCP.
  • Make sure the hostname and DNS are set up appropriately.  Your hostname should resolve; put it in /etc/hosts if required.
  • Update to the latest packages (yum -y update) and reboot.  A command sketch for these steps follows this list.
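
If it helps, here is a minimal sketch of these pre-requisite steps; the hostname and IP below are placeholder values, so adjust everything to your environment:

# Set SELinux to permissive now and after reboot
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config

# Stop and disable firewalld and NetworkManager
systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager

# Make the hostname resolve (placeholder values)
hostnamectl set-hostname osp9.lab.example.com
echo "192.168.0.10 osp9.lab.example.com osp9" >> /etc/hosts

# Update and reboot
yum -y update
reboot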

Installing packstack

yum install -y openstack-packstack

Once packstack is installed, you could simply run “packstack --allinone” as described in most instructions you will find online.  This is just the simplest way to install with packstack, using the default configuration for everything.  That said, in my example here, I want to make some changes to the default config, so here is what I usually do:

packstack --gen-answer-file=/root/answers.txt

This will create an answer file with all configuration settings.   I can now change all the settings I want in this file, and then launch packstack again using this file as configuration input.

Here are all the changes I will do in my answer file:

# turning "ON" heat module.  Heat is great! :-)
CONFIG_HEAT_INSTALL=y

# also turning "ON" LBaaS.  
CONFIG_LBAAS_INSTALL=y

# By default, Neutron only allows vxlan networks.
# But I want "flat" and "local" drivers to be there as well.
# (will be required later to create new networks)
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat,local

# These lines will automatically create an OVS bridge for me.
# This is required by Neutron to get external access.
# Replace "eth0" by the name of your interface
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=ex-net:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0

# By default packstack install some demo stuff.  
# Turning this OFF.
CONFIG_PROVISION_DEMO=n

The “flat” driver and the OVS bridge configuration are usually where people struggle at first.  Regardless of how we configure our external networks later (and I’ll talk about tenant vs provider networks later), we will need a “flat” network to get outside of Openstack: VXLAN packets would have nowhere to go on my network.  Also, an OVS bridge must exist for Neutron to do its job.  The “BRIDGE_MAPPINGS” and “BRIDGE_IFACES” configuration lines will take care of this automatically for you.

Installing Openstack

Ok, now that packstack is installed and we have an answer file, let’s start the installation:

packstack --answer-file=/root/answers.txt

This could take some time to complete, depending on how fast your network and your access to the RPMs are.
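
Once it completes, you can quickly confirm that the OVS bridge defined in the answer file was created and that your interface was added as a port on it (names below match my answer file; adjust to yours):

ovs-vsctl show
ip addr show br-ex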

Finalizing the configuration of Openstack

You can first check that all openstack services are running:

openstack-service status

You should also see a KeystoneRC file in your root folder:  /root/keystonerc_admin

Source this file to get access to your Openstack cloud:

source /root/keystonerc_admin

At this point, I usually create an account for myself with my own password.  *This is an optional step:

openstack user create --project admin --password <password> <username>
openstack role add --user <username> --project admin admin

This way, you can log into Horizon (the web portal) using your own credentials instead of the automatically generated admin password.

You could also create a new KeystoneRC file with your own credentials for when you are in command line.  Again, optional step!
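
As a rough example, such a file could look like the following; the username, password and controller IP are placeholders, and the exact variables are easiest to copy from your generated keystonerc_admin:

# /root/keystonerc_myuser (placeholder values)
export OS_USERNAME=myuser
export OS_PASSWORD=mypassword
export OS_AUTH_URL=http://192.168.0.10:5000/v2.0
export OS_TENANT_NAME=admin
export PS1='[\u@\h \W(keystone_myuser)]\$ '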

At least one image in Glance will be required before we can launch anything.  Let’s just upload a small Cirros image for now (ideal for testing as it’s tiny and fast!).

curl -o /root/cirros.img http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name='cirros' --container-format=bare --disk-format=qcow2 < /root/cirros.img

Finally, before you can launch instances (VMs), you must finalize your tenant network configuration: create at least one network for VMs to boot on.  There are two very common ways to deal with networks in Openstack: tenant networks vs provider networks.

When using tenant networks, your instances boot up on a virtual network (usually based on VXLAN) and need to go through a virtual Openstack router to reach the outside world.  To allow access from the outside to the internal virtual network, you create floating IPs on your virtual router.  This is the most classic way to deploy Openstack.

When using provider networks, you can let your instances boot up directly on the external (or public) network.  No need for virtual networks, no need for floating IPs.  This is a much easier way to deal with networks and can provide better performance (no VXLAN encapsulation), but it obviously doesn’t scale as well from an IP allocation perspective, as each instance will require at least one public IP.

Configuring tenant networks

I first need to create an external (public) network where floating IPs will be assigned:

neutron net-create public --router:external True --provider:physical_network ex-net --provider:network_type flat --tenant-id services

neutron subnet-create public 172.16.185.0/24 --name public --allocation-pool start=172.16.185.10,end=172.16.185.100 --disable-dhcp --gateway=172.16.185.2

A few things to note here:  

  • This will only work if I have allowed the “flat” network driver in my answers.txt file.  If I haven’t done that, I will need to update neutron manually to allow the “flat” driver and restart neutron:  vi /etc/neutron/plugin.ini;  openstack-service restart neutron
  • “ex-net” is the name of the physical network (physnet) I used in my answer file.  It is important that you keep the same name.  Openstack doesn’t attach to a physical interface name like “eth0”; instead we use an alias called a “physnet”.  This way, multiple servers in our stack could have different physical interface names but always use the same alias.
  • I have created my public network under my “services” tenant, but all tenants will see this network.
  • Obviously, change the public subnet configuration to what makes sense for you.  Floating IPs will be allocated in the allocation-pool range defined by this subnet.  You can verify the result as shown below.
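
To verify what was created (names as created above):

neutron net-show public
neutron subnet-show public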

Now, in a tenant network configuration, I need to create at least one router and a private (VXLAN based) network for my instances to boot on:

neutron router-create router
neutron router-gateway-set router public
neutron net-create private
neutron subnet-create private 10.10.10.0/24 --name private --gateway 10.10.10.1 --enable-dhcp --dns-nameserver 8.8.8.8
neutron router-interface-add router private

We should be all good now, let’s boot our first instance on our private network:

nova boot --flavor m1.tiny --image cirros cirros01

Log into Horizon, and you should see your cirros instance and have access to its console.   Update your security groups to allow SSH and assign a floating IP if you’d like to connect to it remotely.
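
As a sketch, the security group and floating IP steps could look like this with the nova CLI of that release; the floating IP value is simply whatever gets allocated for you from the public pool:

# Allow SSH and ICMP in the default security group
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0

# Allocate a floating IP from the public network and attach it
nova floating-ip-create public
nova floating-ip-associate cirros01 172.16.185.11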

Configuring provider networks

Let’s now see how we can avoid using floating IPs, routers and virtual networks.  In many cases, it makes sense to just boot Openstack instances on an existing physical network (provider network).

To do that, you first need to roll back all the configuration you did to create tenant networks in the previous step.  You could also have a mix of tenant and provider networks if you have other network interfaces configured on your Openstack servers.  That said, the following instructions assume that you rolled back all previous configuration and are using the same “eth0 => ex-net => br-ex” configuration.

When an instance is booting up, the instance reaches out to the metadata service to get configuration details.   As an example, if you launch this curl request from an Openstack instance, you will get your hostname back:

# curl http://169.254.169.254/2009-04-04/meta-data/hostname
cirros01

What you need to understand before moving to provider networks is how your instances get network access to the 169.254.169.254 IP address (the metadata service).  By default, the virtual router (your instance’s default gateway) has a route to this metadata service IP.  But when using provider networks, you don’t have this virtual router anymore.  For this reason, you need to change your DHCP agent configuration so that a route is injected into your instances during the DHCP process, and your dnsmasq DHCP server will then also route your instances’ metadata HTTP requests to your metadata server.

# Change the following settings in dhcp_agent.ini
vi /etc/neutron/dhcp_agent.ini

    enable_isolated_metadata=true
    enable_metadata_network=true

# Restart Neutron
openstack-service restart neutron

Now, all that is missing is to create a new public network (provider network) under your tenant, allowing it to boot instances directly on this network.

# Get your tenant (project) ID 
openstack project list

# Create a new network under this tenant ID
neutron net-create public --provider:physical_network ex-net --provider:network_type flat --tenant-id e4455fcc1d82475b8a3a13f656ac701f

# Create a subnet for this network
neutron subnet-create public 172.16.185.0/24 --name public --allocation-pool start=172.16.185.10,end=172.16.185.100 --gateway=172.16.185.2

Now, boot an instance:

nova boot --flavor m1.tiny --image cirros cirros01

Heat template to load-balance a web stack

Here is a heat template to create a web stack.  This template will create a resource group of 3 web servers (you must have a web server image; mine is named webv2), a load-balancing pool with an internal VIP and, finally, a floating IP pointing to this VIP for external access.

* Note that this heat template assumes that you only have one private network.  You will have to specify which subnet you want to use if you have more than one.  You should create your public network under the “services” project as an external network; this way, your internal network will be the default one if you only have one.

heat_template_version: 2015-04-30

description: |   
  Heat template provisioning a stack a web servers,
  a load-balancer and a floating IP mapped to the 
  load-balancing VIP.

parameters:
  image:
    type: string
    label: image
    description: Image name
    default: webv2
  flavor:
    type: string
    label: flavor
    description: Flavor name
    default: m1.tiny

resources:
  web_nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          security_groups:
          - default
          - web
  pool:
    type: OS::Neutron::Pool
    properties:
      name: mypool1
      protocol: HTTP
      lb_method: ROUND_ROBIN
      subnet: web 
      vip: {"protocol_port": 80}
  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      members: { get_attr: [web_nodes, refs] }
      pool_id: { get_resource: pool }
      protocol_port: 80
  floatingip:
    type: OS::Neutron::FloatingIP
    properties:
      fixed_ip_address: { get_attr: [pool, vip, address] }
      floating_network: public
      port_id: { get_attr: [pool, vip, port_id] }

outputs:
  FloatingIP:
    description: Service public VIP
    value: { get_attr: [floatingip, floating_ip_address] }
  VIP:
    description: Internal VIP
    value: { get_attr: [pool, vip, address] }
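
To launch the stack, assuming you saved the template as /root/web-stack.yaml (a file name I picked for this example):

heat stack-create -f /root/web-stack.yaml webstack
heat stack-list
heat output-show webstack FloatingIP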

 

Nested virtualization in Openstack

I personally test all kinds of Openstack setups and needed Openstack to run on Openstack (nested virtualization).

Assuming that you have Intel CPUs, you need the vmx cpu flag to be enabled inside your instances.

On your Openstack compute node, enable nested virtualization at the kernel level:

echo "options kvm-intel nested=y" >> /etc/modprobe.d/dist.conf

I believe the following step might be optional in some cases, but I also modify my nova.conf file with the following settings:

virt_type=kvm
...
cpu_mode=host-passthrough

* Note that enabling “host-passthrough” will configure your instances’ CPU with the exact same model as your hardware CPU model. That said, if you have multiple nodes with different CPU models, it will no longer be possible to live-migrate instances between them.

Reboot your compute node.

Validate that nested virtualization is enabled at the kernel level:

# cat /sys/module/kvm_intel/parameters/nested
Y

Validate that virsh capabilities now reports the “vmx” feature:

# virsh  capabilities

Launch an instance on this node, and validate that your instance has the vmx cpu flag enabled:

# cat /proc/cpuinfo  |grep vmx

You should now be able to install a new hypervisor inside your instances and use nested virtualization.

Affinity or Anti-Affinity groups

Here is how to create a server group with an “affinity” or “anti-affinity” policy:

First, add the appropriate filters in your nova configuration file (/etc/nova/nova.conf): ServerGroupAffinityFilter, ServerGroupAntiAffinityFilter

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,CoreFilter,AggregateInstanceExtraSpecsFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter

Restart nova on your controller node:

openstack-service restart nova

Create a server group with an “affinity” or “anti-affinity” policy. Let’s assume here that I am creating an anti-affinity policy for my database cluster:

nova server-group-create db-cluster1 anti-affinity

*An affinity policy would be exactly the same command, but with “affinity” instead of “anti-affinity” as the second argument.

Find the ID of your new server group:

nova server-group-list

Boot your new database cluster instances with this server group policy:

nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db01
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db02
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db03

All these database servers will be automatically scheduled by nova scheduler to run on different nodes.

Dedicating compute hosts by tenants

Host aggregates allow you to group hosts for different purposes.  In this scenario, I wanted to dedicate some compute hosts to one of my tenants, making sure that no other tenant can provision instances on these hosts.

First, create a host aggregate with a few hosts:

nova aggregate-create reserved_hosts nova
nova aggregate-add-host reserved_hosts host01.lab.marcoberube.com
nova aggregate-add-host reserved_hosts host02.lab.marcoberube.com

Then, set a “filter_tenant_id” metadata tag on this aggregate with the ID of your tenant:

nova aggregate-set-metadata reserved_hosts filter_tenant_id=630fcdd12af7447198afa7a210b5e25f

Finally, change your nova scheduler configuration in /etc/nova/nova.conf to include a new filter named “AggregateMultiTenancyIsolation”. This filter will only allow instances from the specified tenant_id to be provisioned on this aggregate. No other tenant will be allowed on these hosts.

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,AggregateMultiTenancyIsolation
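
As with the affinity filters earlier, restart nova after changing the scheduler filters:

openstack-service restart nova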

You can find the full list of nova scheduler filters for Juno here:
http://docs.openstack.org/juno/config-reference/content/section_compute-scheduler.html

Multiple cinder backends

In this scenario, I wanted to have two cinder backends, both on NFS. My first NFS share is backed by HDD, my second one by SSD. These could just as well be named “fast” and “slow” or anything else.

First, add your two backends in /etc/cinder/cinder.conf

enabled_backends=hdd,ssd

Then, at the end of your cinder.conf file, add the configuration details for each backend. Each backend is defined after the backend name in brackets [backend_name]. Obviously, you could use another driver than NFS if you have another type of storage.

[hdd]
nfs_used_ratio=0.95
nfs_oversub_ratio=1.0
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares_hdd.conf
volume_backend_name=hdd

[ssd]
nfs_used_ratio=0.95
nfs_oversub_ratio=1.0
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares_ssd.conf
volume_backend_name=ssd

Restart your cinder services:

openstack-service restart cinder

Create your NFS share location file for each offering:

# vi /etc/cinder/nfs_shares_ssd.conf 
192.168.12.1:/storage/ssd/cinder

# vi /etc/cinder/nfs_shares_hdd.conf 
192.168.12.1:/storage/hdd/cinder

Finally, create your new cinder types and set the backend name for each type as defined in your cinder.conf file:

cinder type-create hdd
cinder type-key hdd set volume_backend_name=hdd
cinder type-create ssd
cinder type-key ssd set volume_backend_name=ssd

You should now be able to create cinder volumes for both types of offering. You can review your configuration by listing the specs:

cinder extra-specs-list

Logs are in /var/log/cinder/*.log if you need to troubleshoot any issue.
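
For example, to create a 10GB volume on each backend (volume names are arbitrary; depending on your cinder client version the name flag is --display-name or --name):

cinder create --volume-type hdd --display-name vol-hdd01 10
cinder create --volume-type ssd --display-name vol-ssd01 10
cinder list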

Openstack instance resize

Short version for demo environment:

Change the following values in nova.conf:

allow_resize_to_same_host=true
resize_confirm_window=5

Then, restart nova: openstack-service restart nova
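
You can then test a resize; the instance and flavor names here are just examples, and with resize_confirm_window=5 the resize should confirm itself automatically after a few seconds:

nova resize cirros01 m1.small
# only needed if the resize does not auto-confirm:
nova resize-confirm cirros01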

Long version for multi compute node environment:

Give the nova user a bash shell on each host:

usermod -s /bin/bash nova

Allow SSH without a password between all your hosts for the “nova” user:

cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
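
Passwordless SSH also implies that the nova user’s SSH keys are distributed between the hosts. A rough sketch (nova’s home directory is /var/lib/nova; the hostname is an example from this lab):

# As the nova user on each host
su - nova
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# Append each host's /var/lib/nova/.ssh/id_rsa.pub to
# /var/lib/nova/.ssh/authorized_keys on every other host
# (manually, or with ssh-copy-id if password auth is available):
ssh-copy-id nova@host02.lab.marcoberube.com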

When your migration is failing, you can reset the state of an instance using the following command:

nova reset-state --active <instance>

More details about the second procedure on this blog:
http://funcptr.net/2014/09/29/openstack-resizing-of-instances/

Installing Red Hat Openstack 5 on RHEL7

This installation procedure is a simple way (using packstack) to deploy a multi-node environment in a few minutes.

Using this procedure, all services will be installed on your controller node, except for the compute services, which can be offloaded to other servers.   Here is a simple diagram showing my setup:

[Diagram: one controller node running all Openstack services, plus separate compute nodes]

Obviously, this is not following best-practices.  But it’s an easy way to get Openstack up in a few minutes and test functionalities including live-migration between hosts.

We will configure Neutron to use VXLAN to encapsulate traffic between your hosts and provide full SDN capabilities.

Install a basic RHEL7 on all your nodes.

All nodes should have two interfaces (public, private).  That said, your public interface will only be used on your controller node.  You can disable the public interface on your compute nodes later if you’d like.

Register / Update / Disable Network Manager  (all your nodes)

subscription-manager register
subscription-manager attach --auto
subscription-manager repos --disable="*"
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms

yum -y update

systemctl disable NetworkManager

Verify that your network interface files in /etc/sysconfig/network-scripts have an entry DEVICE=<interface_name>.  When disabling NetworkManager, your interfaces will not come back up if this entry is missing.

reboot

Disable SELINUX on all your hosts

setenforce 0
vi /etc/sysconfig/selinux

Install NFS server on your controller node for Cinder and Nova instances

yum groupinstall -y file-server
firewall-cmd --permanent --add-service=nfs
firewall-cmd --reload
systemctl enable rpcbind
systemctl enable nfs-server
systemctl enable nfs-lock
systemctl enable nfs-idmap
systemctl start rpcbind
systemctl start nfs-server
systemctl start nfs-lock
systemctl start nfs-idmap
mkdir -p /exports/cinder
chmod 777 /exports/cinder
mkdir -p /exports/nova
chmod 777 /exports/nova

vi /etc/exports
    /exports/cinder  *(rw,no_root_squash)
    /exports/nova  *(rw,no_root_squash)

exportfs -avr
systemctl restart nfs-server

* Obviously, 777 permissions are not ideal.  But once packstack installation is completed, you can come back and change ownership of these folders to the appropriate cinder & nova users.

Install Packstack

yum install -y openstack-packstack

Generate SSH keys

ssh-keygen

Generate a packstack answer file

packstack --gen-answer-file=/root/answers.txt

Edit the answer file to provide all configuration details

vi /root/answers.txt

Use my packstack answer file as an example to validate all your settings.

You could also just use my file, but by generating a new file you make sure you are compatible with the latest packstack version.

vi /root/answers.txt <= Update all IP addresses with the appropriate IPs for your environment, plus all other details unique to your environment (NFS share, etc…). Most of the file should be good as-is.

Run packstack

packstack --answer-file=/root/answers.txt

Configure your External Bridge

An external bridge named br-ex must be configured on your controller node to let your host reach your external (public) network.  You can get this done automatically by creating a new file named /etc/sysconfig/network-scripts/ifcfg-br-ex.

You also need to modify your existing public interface configuration in /etc/sysconfig/network-scripts.

The idea is to move the IP address to your bridge and connect your physical interface as an Openvswitch port on your bridge instead.

Here are some configuration file examples.  Just copy this but obviously, replace configuration values with your own network settings:

BRIDGE:      ifcfg-br-ex

PUBLIC INTERFACE:   ifcfg-em1
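
Here is a rough sketch of what these two files could contain; the IP address, netmask, gateway and interface name are placeholders for your own network settings, and this assumes the OVS network scripts shipped with openvswitch:

# /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.185.5
NETMASK=255.255.255.0
GATEWAY=172.16.185.2

# /etc/sysconfig/network-scripts/ifcfg-em1
DEVICE=em1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes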

Live migration

At this point, Openstack should be up and running, but all your instances (VMs) will be running locally on each compute node under /var/lib/nova/instances.

All you have to do is mount a shared NFS export on this folder to enable live migration.

On your controller node:

chown nova:nova /exports/nova
rsync -av root@<compute_node>:/var/lib/nova/ /exports/nova/

On your compute node:

mv /var/lib/nova/instances /var/lib/nova/instances.backup
mount -t nfs <controller_node>:/exports/nova/instances /var/lib/nova/instances

* Obviously, you should add the appropriate line of configuration to your fstab to get this done automatically at boot time; see the example below.
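
For example, an fstab entry on each compute node could look like this (the controller hostname is a placeholder):

<controller_node>:/exports/nova/instances  /var/lib/nova/instances  nfs  defaults  0 0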

** Common issues:  Make sure iptables is allowing NFS;  make sure your hosts can resolve each other (or add them to each other’s /etc/hosts files).

Executing troubleshooting commands inside a namespace

Here is how you can execute some troubleshooting commands as if you were running them on a virtual router inside Neutron.

[root@serverX ~(keystone_admin)]# neutron router-list
[root@serverX ~(keystone_admin)]# ip netns
[root@serverX ~(keystone_admin)]# ip netns exec qrouter-10bf634d-3228-4041-8f3a-4d5e0e603c07 ping 8.8.8.8
[root@serverX ~(keystone_admin)]# ip netns exec qrouter-10bf634d-3228-4041-8f3a-4d5e0e603c07 netstat -rn

*  Kernel namespaces are used to isolate virtual networks from each other.  This is great from a security perspective, but it sometimes makes troubleshooting harder.