How to install Red Hat Openstack 9 with Packstack

Introduction

Here is an updated version of how to install Red Hat Openstack 9 (Mitaka) with Packstack and perform some common basic configuration changes. The same procedure could be used with RDO if you don’t have subscriptions for Red Hat Openstack. Obviously, the main differences would be that you would use CentOS instead of RHEL, and configure different repos.

Packstack is a simple tool to quickly install Red Hat Openstack in a non-HA configuration, whereas OSP Director is the appropriate tool for a full HA environment with complete lifecycle capabilities.

Pre-requisites before starting the install:

  • Basic RHEL7 installation
  • Valid Openstack subscription attached to this server
  • Following repos:
    • rhel-7-server-rpms
    • rhel-7-server-extras-rpms
    • rhel-7-server-rh-common-rpms
    • rhel-7-server-optional-rpms
    • rhel-7-server-openstack-9-tools-rpms
    • rhel-7-server-openstack-9-rpms
    • rhel-7-server-openstack-9-optools-rpms
# ATTACHING SUBS AND REPOS
subscription-manager register --username='your_username' --password='your_password'
subscription-manager attach --pool=your_pool_id
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-9-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-7-server-openstack-9-optools-rpms --enable=rhel-7-server-openstack-9-tools-rpms

# FOR RDO ON CENTOS, YOU CAN FIND HOW TO CONFIGURE REPOS HERE: https://www.rdoproject.org/install/quickstart/

  • Disable SELinux (set it to permissive or disabled)
  • Stop and disable firewalld
  • Stop and disable NetworkManager
  • Make sure you have static IPs on your NICs, not DHCP.
  • Make sure the hostname and DNS are set up appropriately. Your hostname should resolve; put it in /etc/hosts if required.
  • Update to the latest packages (yum -y update) and reboot. (Example commands below.)
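
Here is roughly what that preparation looks like on the node (a sketch; the IP address and hostname below are placeholders for your own):

# SELinux: permissive now, disabled after the next reboot
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Stop and disable firewalld and NetworkManager
systemctl stop firewalld NetworkManager
systemctl disable firewalld NetworkManager

# Make sure the hostname resolves (placeholder values)
echo "192.168.0.10  osp9.lab.example.com osp9" >> /etc/hosts

# Update and reboot
yum -y update
reboot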

Installing packstack

yum install -y openstack-packstack

Once packstack is installed, you can simply run "packstack --allinone" as described in most instructions you will find online.  This is the simplest way to install with packstack, using the default configuration for everything.  That said, in my example here I want to make some changes to the default config, so here is what I usually do:

packstack --gen-answer-file=/root/answers.txt

This will create an answer file with all configuration settings.   I can now change all the settings I want in this file, and then launch packstack again using this file as configuration input.

Here are all the changes I will do in my answer file:

# turning "ON" heat module.  Heat is great! :-)
CONFIG_HEAT_INSTALL=y

# also turning "ON" LBaaS.  
CONFIG_LBAAS_INSTALL=y

# By default, Neutron only allows vxlan networks.
# But I want "flat" and "local" drivers to be there as well.
# (will be required later to create new networks)
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat,local

# These lines will automatically create an OVS bridge for me.
# This is required by Neutron to get external access.
# Replace "eth0" by the name of your interface
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=ex-net:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0

# By default packstack install some demo stuff.  
# Turning this OFF.
CONFIG_PROVISION_DEMO=n

The "flat" driver and the OVS bridge configuration are usually where people struggle at first.  Regardless of how we configure our external networks later (and I'll talk about tenant vs provider networks later), we will need a "flat" network to get outside of Openstack; VXLAN packets would have nowhere to go on my network.  Also, an OVS bridge must exist for Neutron to do its job.  The "BRIDGE_MAPPINGS" and "BRIDGE_IFACES" configuration lines take care of this automatically for you.

Installing Openstack

Ok, now that packstack is installed and we have an answer file, let's start the installation:

packstack --answer-file=/root/answers.txt

This could take some time to complete, depending on how fast your network and your access to the RPMs are.
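
Once it completes, if you used the bridge settings above, a quick sanity check is to confirm that the br-ex bridge exists and that your interface was attached to it:

ovs-vsctl show
ovs-vsctl list-ports br-ex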

Finalizing the configuration of Openstack

You can first check that all openstack services are running:

openstack-service status

You should also see a KeystoneRC file in your root folder:  /root/keystonerc_admin

Source this file to get access to your Openstack cloud:

source /root/keystonerc_admin

At this point, I usually create an account for myself with my own password.   *This is an optional step:

openstack user create --project admin --password <password> <username>
openstack role add --user <username> --project admin admin

This way, you can log into Horizon (the web portal) using your own credentials instead of the automatically generated admin password.

You could also create a new KeystoneRC file with your own credentials for when you are on the command line.  Again, an optional step!
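
For example, the quickest way is to copy the generated admin file and change the credentials (the file name here is arbitrary):

cp /root/keystonerc_admin /root/keystonerc_myuser
vi /root/keystonerc_myuser   # change OS_USERNAME and OS_PASSWORD to your own user
source /root/keystonerc_myuser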

At least one image in Glance will be required before we can launch anything.  Let’s just upload a small Cirros image for now (ideal for testing as it’s tiny and fast!).

curl -o /root/cirros.img http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
glance image-create --name='cirros' --container-format=bare --disk-format=qcow2 < /root/cirros.img

Finally, before you can launch instances (VMs), you must finalize your tenant network configuration: create at least one network for VMs to boot on.  There are two very common ways to deal with networks in Openstack: tenant networks vs provider networks.

When using tenant networks, your instances boot on a virtual network (usually based on VXLAN) and have to go through a virtual Openstack router to reach the outside world.   To allow access from the outside to the internal virtual network, you create floating IPs on your virtual router.   This is the most classic way to deploy Openstack.

When using provider networks, you can allow your instances to boot directly on the external (or public) network.  No need for virtual networks, no need for floating IPs.    This is a much easier way to deal with networks and can provide better performance (no VXLAN encapsulation), but it obviously doesn't scale as well from an IP allocation perspective, as each instance will require at least one public IP.

Configuring tenant networks

I first need to create an external (public) network where floating IPs will be assigned:

neutron net-create public --router:external True --provider:physical_network ex-net --provider:network_type flat --tenant-id services

neutron subnet-create public 172.16.185.0/24 --name public --allocation-pool start=172.16.185.10,end=172.16.185.100 --disable-dhcp --gateway=172.16.185.2

A few things to note here:  

  • This will only work if I have allowed the "flat" network driver in my answers.txt file.  If I haven't done that, I will need to update neutron manually to allow the "flat" driver and restart neutron (see the sketch after this list):  vi /etc/neutron/plugin.ini;  openstack-service restart neutron
  • "ex-net" is the name of the physical network (physnet) I used in my answer file.  It is important that you keep the same name.  Openstack doesn't attach to a physical interface name like "eth0"; instead, we use an alias called a "physnet".   This way, multiple servers in our stack could have different physical interface names but always use the same alias.
  • I created my public network under my "services" tenant (use that project's ID for --tenant-id), but all tenants will see this network.
  • Obviously, change the public subnet configuration to whatever makes sense for you.   Floating IPs will be allocated from the allocation-pool range defined by this subnet.
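
If you do need that manual change, the relevant lines in /etc/neutron/plugin.ini look roughly like this (a sketch; plugin.ini points at the ML2 configuration on a packstack install, and your section contents may differ):

[ml2]
type_drivers = vxlan,flat,local

[ml2_type_flat]
flat_networks = *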

Now, in a tenant network configuration, I need to create at least one router and a private (VXLAN based) network for my instances to boot on:

neutron router-create router
neutron router-gateway-set router public
neutron net-create private
neutron subnet-create private 10.10.10.0/24 --name private --gateway 10.10.10.1 --enable-dhcp --dns-nameserver 8.8.8.8
neutron router-interface-add router private

We should be all good now, let’s boot our first instance on our private network:

nova boot --flavor m1.tiny --image cirros cirros01

Log into Horizon; you should see your cirros instance and have access to its console.   Update your security groups to allow SSH and assign a floating IP if you'd like to connect to it remotely.
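
For example, something like this opens up SSH and ICMP in the default security group and attaches a floating IP to the instance (replace 172.16.185.11 with whatever floating IP you are actually allocated from the public pool):

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova floating-ip-create public
nova floating-ip-associate cirros01 172.16.185.11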

Configuring provider networks

Let's now see how we can avoid using floating IPs, routers and virtual networks.    In many cases, it makes sense to just boot Openstack instances on an existing physical network (provider network).

To do that, you first need to roll back all the configuration you did to create tenant networks in the previous step.   You could also have a mix of tenant and provider networks if you have other network interfaces configured on your Openstack servers.  That said, the following instructions will assume that you rolled back all previous configuration and are using the same "eth0 => ex-net => br-ex" configuration.

When an instance is booting up, it reaches out to the metadata service to get configuration details.   As an example, if you launch this curl request from an Openstack instance, you will get your hostname back:

# curl http://169.254.169.254/2009-04-04/meta-data/hostname
cirros01

What you need to understand before moving to provider networks is how your instances get network access to the 169.254.169.254 IP address (the metadata service).  By default, the virtual router (your instance's default gateway) has a route to this metadata service IP.   But when using provider networks, you don't have this virtual router anymore.   For this reason, you need to change your DHCP agent configuration so that a route is injected into your instances during the DHCP process; the dnsmasq DHCP server will then also relay your instances' metadata HTTP requests to the metadata server.

# Change the following settings in dhcp_agent.ini
vi /etc/neutron/dhcp_agent.ini

    enable_isolated_metadata=true
    enable_metadata_network=true

# Restart Neutron
openstack-service restart neutron

Now, all that is missing is to create a new public network (provider network) under your tenant, allowing it to boot instances directly on this network.

# Get your tenant (project) ID 
openstack project list

# Create a new network under this tenant ID
neutron net-create public --provider:physical_network ex-net --provider:network_type flat --tenant-id e4455fcc1d82475b8a3a13f656ac701f

# Create a subnet for this network
neutron subnet-create public 172.16.185.0/24 --name public --allocation-pool start=172.16.185.10,end=172.16.185.100 --gateway=172.16.185.2

Now, boot an instance:

nova boot --flavor m1.tiny --image cirros cirros01
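
The instance should come up with an address taken straight from the 172.16.185.0/24 provider subnet, and the metadata request shown earlier should still work thanks to the DHCP agent changes:

nova list
# from the instance console:
curl http://169.254.169.254/2009-04-04/meta-data/hostname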

Heat template to load-balance a web stack

Here is a heat template to create a web stack.  This template will create a resource group of 3 web servers (you must have a web server image; mine is named webv2), a load-balancing pool with an internal VIP and, finally, a floating IP pointing to this VIP for external access.

* Note that this heat template assumes that you only have one private network. You will have to specify which subnet you want to use if you have more than one. You should create your public network under “services” project as an external network. This way, your internal network will be the default one if you only have one.

heat_template_version: 2015-04-30

description: |
  Heat template provisioning a stack of web servers,
  a load-balancer and a floating IP mapped to the
  load-balancing VIP.

parameters:
  image:
    type: string
    label: image
    description: Image name
    default: webv2
  flavor:
    type: string
    label: flavor
    description: Flavor name
    default: m1.tiny

resources:
  web_nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          security_groups:
          - default
          - web
  pool:
    type: OS::Neutron::Pool
    properties:
      name: mypool1
      protocol: HTTP
      lb_method: ROUND_ROBIN
      subnet: web 
      vip: {"protocol_port": 80}
  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      members: { get_attr: [web_nodes, refs] }
      pool_id: { get_resource: pool }
      protocol_port: 80
  floatingip:
    type: OS::Neutron::FloatingIP
    properties:
      fixed_ip_address: { get_attr: [pool, vip, address] }
      floating_network: public
      port_id: { get_attr: [pool, vip, port_id] }

outputs:
  FloatingIP:
    description: Service public VIP
    value: { get_attr: [floatingip, floating_ip_address] }
  VIP:
    description: Internal VIP
    value: { get_attr: [pool, vip, address] }
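
Assuming the template above is saved as /root/web-stack.yaml (any file name works), you can launch and inspect the stack with the Mitaka-era heat client:

heat stack-create web-stack -f /root/web-stack.yaml
heat stack-list
heat output-show web-stack FloatingIP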


Adding physical network connectivity (provider network)

As an example, let's assume that you want to run your web servers in Openstack to leverage LBaaS and autoscaling, but that for some reason these web servers need access to virtual machines or physical servers running on a physical network.  Here are some instructions to do that.

Provider network

In this example, the public network (192.168.5.0/24) is the network from which my users will access my web application.   A floating IP will be mapped to a load-balancing pool VIP to distribute the load between my web servers.  The private network (10.10.10.0/24) is a VXLAN virtual network inside Openstack Neutron.  The prov1 network, however, is another physical network where my database server(s) reside.  Another use-case could be getting access to another data center or a remote office from my internal Openstack virtual network.

Step 1 – Create a new bridge on your controller node(s) and add a physical interface to this bridge.

I created a new bridge called br-prov1:

# cat /etc/sysconfig/network-scripts/ifcfg-br-prov1 
ONBOOT=yes
IPADDR="192.168.3.100"
PREFIX="24"
DEVICE=br-prov1
DEVICETYPE=ovs
OVSBOOTPROTO=none
TYPE=OVSBridge

And attached my physical interface (eth1) to this new bridge:

# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-prov1
ONBOOT=yes
BOOTPROTO=none
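
With both files in place, bring the bridge and the port up and confirm that eth1 shows up under br-prov1:

ifup br-prov1
ifup eth1
ovs-vsctl show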

Step 2 – Create a new physnet device in OVS

Update the "bridge_mappings" setting in the following file with your new physnet:
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

bridge_mappings=physnet1:br-ex,physnet2:br-prov1
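
Neutron has to pick up the new mapping, so restart its services the same way as earlier in this article:

openstack-service restart neutron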

Step 3 – Finally, create a new flat network (flat driver must be enabled in /etc/neutron/plugin.ini)

In this example, I am creating a shared flat network under my services project.  This way, all projects will have access to this physical network.  That said, you could also create this network under a specific project.

# shared flat network on physnet2; use your own "services" project ID
neutron net-create prov1 --shared --provider:physical_network physnet2 --provider:network_type flat --tenant-id <services_project_id>

Once this is completed, don't forget to set up your subnet (providing all subnet configuration details) and attach it to your router, as per the diagram at the top of this article.
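
As a sketch, reusing the addressing from this example and assuming the tenant router created earlier is still named "router" (adjust names and ranges to your environment):

neutron subnet-create prov1 192.168.3.0/24 --name prov1 --gateway 192.168.3.254 --disable-dhcp
neutron router-interface-add router prov1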

Step 4 – Routing back

Don't forget to add a route back from your backend provider network.  In my example, all traffic is routed by my virtual router through 192.168.3.254, so my backend servers needed the following route back:  10.10.10.0/24 => 192.168.3.254.
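
On a Linux backend server, that route looks like this (make it persistent in your network configuration if you need it across reboots):

ip route add 10.10.10.0/24 via 192.168.3.254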


* Please note that following these instructions will not allow you to boot openstack instances on this provider network.  Additional steps are required to do so: all compute nodes would require access to this br-prov1 bridge, metadata server access would be required from this network to receive cloud-init info, and IP management would no longer be handled by the openstack dhcp service.  I'll try to create another article when possible to provide more details about this.


ksm process CPU issue on compute nodes

ksmd allows you to oversubscribe your compute nodes by sharing identical memory pages between the instances running on a node.

A CPU tax is to be expected for this process to do its job.  That said, I have been running into an issue where the CPU tax was over 50%, which is obviously not acceptable.

Here is how to disable ksmd (on RHEL 7, KSM is controlled by the ksm and ksmtuned services):

systemctl stop ksmtuned ksm
systemctl disable ksmtuned ksm

Unfortunately, this means that you will no longer be sharing memory pages between instances, so each node will use more memory.

ksmd can also be fine-tuned in the following configuration file:

/etc/ksmtuned.conf

But finding the right parameters for your specific configuration can be a time consuming task.
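
For reference, these are some of the tunables you will find in that file (the default values below are from RHEL 7 and may differ on your release); increasing KSM_SLEEP_MSEC or lowering KSM_NPAGES_MAX are typical first steps to reduce the CPU cost:

# /etc/ksmtuned.conf (excerpt, defaults shown for reference)
KSM_MONITOR_INTERVAL=60
KSM_SLEEP_MSEC=10
KSM_NPAGES_MAX=1250
KSM_THRES_COEF=20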

More information can be found here:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/sect-KSM-The_KSM_tuning_service.html

Nested virtualization in Openstack

I personally test all kinds of Openstack setups and needed Openstack to run on Openstack (nested virtualization).

Assuming that you have Intel CPUs, you need the vmx cpu flag to be enabled inside your instances.

On your Openstack compute node, enable nested virtualization at the kernel level:

echo "options kvm-intel nested=y" >> /etc/modprobe.d/dist.conf

I believe the following step might be optional in some cases, but I also modify my nova.conf file with the following settings:

virt_type=kvm
...
cpu_mode=host-passthrough

* Note that enabling "host-passthrough" will configure your instances' CPUs with the exact same model as your hardware CPU. That said, if you have multiple nodes with different CPU models, it will not be possible to live-migrate instances between them anymore.

Reboot your compute node.

Validate that nested virtualization is enabled at the kernel level:

# cat /sys/module/kvm_intel/parameters/nested
Y

Validate that virsh capabilities now reports the "vmx" feature:

# virsh  capabilities
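
You are looking for a <feature name='vmx'/> entry in the host CPU section; a quick grep does the job:

virsh capabilities | grep vmx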

Launch an instance on this node, and validate that your instance has the vmx cpu flag enabled:

# cat /proc/cpuinfo  |grep vmx

You should now be able to install a new hypervisor inside your instances and support nested virtualization.

Affinity or Anti-Affinity groups

Here is how to create a server group with an “affinity” or “anti-affinity” policy:

First, add the appropriate filters in your nova configuration file (/etc/nova/nova.conf): ServerGroupAffinityFilter, ServerGroupAntiAffinityFilter

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ImagePropertiesFilter,CoreFilter,AggregateInstanceExtraSpecsFilter,ServerGroupAffinityFilter,ServerGroupAntiAffinityFilter

Restart nova on your controller node:

openstack-service restart nova

Create a server group with an “affinity” or “anti-affinity” policy. Let’s assume here that I am creating an anti-affinity policy for my database cluster:

nova server-group-create db-cluster1 anti-affinity

*Affinity policy would be exactly the same command, but with “affinity” instead of “anti-affinity” as a second argument.

Find the ID of your new server group:

nova server-group-list

Boot your new database cluster instances with this server group policy:

nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db01
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db02
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db03

All these database servers will be automatically scheduled by nova scheduler to run on different nodes.
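
If you want to verify the placement, an admin can check which hypervisor each instance landed on (the hypervisor_hostname field is part of the standard nova show output for admins):

nova show db01 | grep hypervisor_hostname
nova show db02 | grep hypervisor_hostname
nova show db03 | grep hypervisor_hostname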

Dedicating compute hosts by tenants

Host aggregates allow you to group hosts for different purposes.  In this scenario, I wanted to dedicate some compute hosts to one of my tenants, making sure that no other tenant can provision instances on these hosts.

First, create a host aggregate with a few hosts:

nova aggregate-create reserved_hosts nova
nova aggregate-add-host reserved_hosts host01.lab.marcoberube.com
nova aggregate-add-host reserved_hosts host02.lab.marcoberube.com

Then, set a "filter_tenant_id" metadata tag on this aggregate with the ID of your tenant:

nova aggregate-set-metadata reserved_hosts filter_tenant_id=630fcdd12af7447198afa7a210b5e25f

Finally, change your nova scheduler configuration in /etc/nova/nova.conf to include a new filter named "AggregateMultiTenancyIsolation". This filter will only allow instances from the specified tenant_id to be provisioned on this aggregate; no other tenant will be allowed on these hosts.

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,AggregateMultiTenancyIsolation
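
As with the server group filters earlier, nova has to be restarted on the controller for the scheduler change to take effect:

openstack-service restart nova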

You can find the full list of nova scheduler filters for Juno here:
http://docs.openstack.org/juno/config-reference/content/section_compute-scheduler.html

Multiple cinder backends

In this scenario, I wanted to have two cinder backends both on NFS. My first NFS share is backed by HDD, my second one by SSD. This could be renamed “fast” and “slow” or any other names.

First, add your two backends in /etc/cinder/cinder.conf

enabled_backends=hdd,ssd

Then, at the end of your cinder.conf file, add the configuration details for each backend. Each backend is defined after the backend name in brackets [backend_name]. Obviously, you could use another driver than NFS if you have another type of storage.

[hdd]
nfs_used_ratio=0.95
nfs_oversub_ratio=1.0
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares_hdd.conf
volume_backend_name=hdd

[ssd]
nfs_used_ratio=0.95
nfs_oversub_ratio=1.0
volume_driver=cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config=/etc/cinder/nfs_shares_ssd.conf
volume_backend_name=ssd

Restart your cinder services:

openstack-service restart cinder

Create your NFS share location file for each offering:

# vi /etc/cinder/nfs_shares_ssd.conf 
192.168.12.1:/storage/ssd/cinder

# vi /etc/cinder/nfs_shares_hdd.conf 
192.168.12.1:/storage/hdd/cinder

Finally, create your new cinder types and set the backend name for each type as defined in your cinder.conf file:

cinder type-create hdd
cinder type-key hdd set volume_backend_name=hdd
cinder type-create ssd
cinder type-key ssd set volume_backend_name=ssd

You should now be able to create cinder volumes for both types of offering. You can review your configuration by listing the specs:

cinder extra-specs-list
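
You can then test both backends by creating a small volume of each type (sizes are in GB, volume names are arbitrary, and this assumes the v2 cinder client):

cinder create --volume-type hdd --name test-hdd 1
cinder create --volume-type ssd --name test-ssd 1
cinder list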

Logs are in /var/log/cinder/*.log if you need to troubleshoot any issue.

Openstack instance resize

Short version for demo environment:

Change the following values in nova.conf:

allow_resize_to_same_host=true
resize_confirm_window=5

Then, restart nova: openstack-service restart nova

Long version for multi compute node environment:

Give the nova user a bash shell on each host:

usermod -s /bin/bash nova

Allow SSH without a password (and without host key prompts) between all your hosts for the "nova" user.  As the nova user, create the following SSH config:

cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
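
The config above only takes care of host key prompts; the nova user also needs an SSH key pair that is trusted on every host. A minimal sketch (the nova user's home directory is normally /var/lib/nova; host names are placeholders):

su - nova
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# append ~/.ssh/id_rsa.pub to /var/lib/nova/.ssh/authorized_keys on every other host,
# and make sure .ssh is mode 700 and its files mode 600, owned by nova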

If your migration fails, you can reset the state of an instance using the following command:

nova reset-state --active <instance>

More details about the second procedure on this blog:
http://funcptr.net/2014/09/29/openstack-resizing-of-instances/

Installing Openstack offline

I recently needed to install Openstack in lab environments with very limited or no internet connectivity.  This is also a very fast way to install multiple nodes without having to download packages from the internet on each individual server.  So even if you have an internet connection in your lab, it might be worth doing this to save time.   Once your repos are local, a packstack install is REALLY fast!!!

Here is how I did it:

1.  Download all the packages on a server with internet connectivity:

subscription-manager register
subscription-manager subscribe --pool=[your_pool_id]

(you can find your pool ID by running:   subscription-manager list --available --all)
Then, enable all the channels that you will want to download. For Red Hat Enterprise Linux Openstack 6 (Juno), you will need the following repos:

subscription-manager repos --enable rhel-7-server-rpms --enable rhel-7-server-openstack-6.0-rpms

Once you have all the channels you want enabled, sync these channels locally. In my example here, I will later serve my packages from this server over HTTP, so I download them right away into my html folder under /var/www/html/repos.

yum install yum-utils httpd
reposync --gpgcheck -lnm --repoid=rhel-7-server-rpms --download_path=/var/www/html/repos
reposync --gpgcheck -lnm --repoid=rhel-7-server-openstack-6.0-rpms --download_path=/var/www/html/repos

You could also backup these packages and move them to any other server to deliver them over HTTP from an internal web server. Once your packages are in the right location, create your repos using the following commands:

yum install createrepo

Copy your files into /var/www/html/repos (if you are moving them to a different server)

createrepo -v /var/www/html/repos/rhel-7-server-rpms
createrepo -v /var/www/html/repos/rhel-7-server-openstack-6.0-rpms

Finally, to use these repos from your servers on your internal network (without internet access), add the following configuration file: /etc/yum.repos.d/local.repo

[rhel-7-server-rpms]
name = Red Hat Enterprise Linux 7 Server (RPMs)
baseurl = http://--your_web_server_ip--/repos/rhel-7-server-rpms
enabled = 1
gpgcheck = 0
sslverify = 0

[rhel-7-server-openstack-6.0-rpms]
name = Red Hat Enterprise Linux Openstack Platform 6.0 (RPMs)
baseurl = http://--your_web_server_ip--/repos/rhel-7-server-openstack-6.0-rpms
enabled = 1
gpgcheck = 0
sslverify = 0

Test this out:

yum repolist
yum update -y
reboot
yum install openstack-packstack
.....

Refer to my openstack installation procedure earlier in this article.

You are good to go.  Obviously, there are much better solutions than this available, like Satellite or the Red Hat Openstack installer. But I know that sometimes, when playing in a lab, on your laptop or in a very secured area, having an easy offline option can be very helpful.