Install OpenShift on your Mac in a container

Here is how you can easily run OpenShift in a Docker container on your Mac.   Super useful for testing OpenShift and building apps on your laptop before pushing them out.

  1. Install Docker.
  2. Install the OpenShift CLI (*requires a subscription).
  3. Set “” as an insecure registry in Docker preferences.
  4. Install the “socat” binary using brew:

    $  brew install socat

  5. Start OpenShift using the OpenShift CLI:
    $ oc cluster up


That’s it!   How cool is this?   If you’re like me and you already had Docker and brew installed on your Mac, it really takes less than 5 minutes to get OpenShift running.
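The registry value in step 3 was stripped from this post. If memory serves from the OpenShift Origin docs of that era, “oc cluster up” expects the cluster’s default service network added as an insecure registry; in Docker for Mac this goes under Preferences > Daemon, or into daemon.json (subnet assumed):

```json
{
  "insecure-registries": ["172.30.0.0/16"]
}
```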



How to install Red Hat OpenStack 9 with Packstack


Here is an updated version of how to install Red Hat OpenStack 9 (Mitaka) with Packstack and perform some common basic configuration changes. The same procedure can be used with RDO if you don’t have subscriptions for Red Hat OpenStack. Obviously, the main differences are that you would use CentOS instead of RHEL and configure different repos.

Packstack is a simple tool to quickly install Red Hat OpenStack in a non-HA configuration, whereas OSP Director is the appropriate tool for a full HA environment with complete lifecycle capabilities.

Pre-requisites before starting the install:

  • Basic RHEL7 installation
  • Valid OpenStack subscription attached to this server
  • The following repos enabled:
    • rhel-7-server-rpms
    • rhel-7-server-extras-rpms
    • rhel-7-server-rh-common-rpms
    • rhel-7-server-optional-rpms
    • rhel-7-server-openstack-9-tools-rpms
    • rhel-7-server-openstack-9-rpms
    • rhel-7-server-openstack-9-optools-rpms
You can register your system and configure these repos with subscription-manager:

subscription-manager register --username='your_username' --password='your_password'
subscription-manager attach --pool=your_pool_id
subscription-manager repos --disable='*'
subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-9-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-rh-common-rpms --enable=rhel-7-server-openstack-9-optools-rpms --enable=rhel-7-server-openstack-9-tools-rpms


  • Disable SELinux (set it to permissive or disabled)
  • Stop and Disable Firewalld
  • Stop and Disable NetworkManager
  • Make sure you have static IPs on your NICs, not DHCP based.
  • Make sure the hostname and DNS are set up appropriately.  Your hostname should resolve; put it in /etc/hosts if required.
  • Update to latest packages (yum -y update) and reboot.

Installing packstack

yum install -y openstack-packstack

Once packstack is installed, you can simply run “packstack --all-in-one” as described in most instructions you will find online.  This is the simplest way to install with packstack, using the default configuration for everything.  That said, in my example here, I want to make some changes to the default config.   So here is what I usually do:

packstack --gen-answer-file=/root/answers.txt

This will create an answer file with all configuration settings.   I can now change any settings I want in this file, and then launch packstack again using this file as configuration input.

Here are all the changes I will do in my answer file:

# turning "ON" heat module.  Heat is great! :-)

# also turning "ON" LBaaS.  

# By default, Neutron only allows vxlan networks.
# But I want "flat" and "local" drivers to be there as well.
# (will be required later to create new networks)

# These lines will automatically create an OVS bridge for me.
# This is required by Neutron to get external access.
# Replace "eth0" by the name of your interface

# By default packstack installs some demo stuff.
# Turning this OFF.
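The actual answer-file lines were stripped from this post. Assuming the standard Packstack answer-file keys for Mitaka, the changes described by the comments above would look roughly like this:

```ini
# /root/answers.txt (excerpt; key names assumed from Packstack defaults)
CONFIG_HEAT_INSTALL=y
CONFIG_LBAAS_INSTALL=y
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat,local
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=ex-net:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0
CONFIG_PROVISION_DEMO=n
```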

The “flat” driver and the OVS bridge configuration are usually where people struggle at first.  Regardless of how we configure our external networks later (I’ll talk about tenant vs provider networks below), we need a “flat” network to get outside of OpenStack; VXLAN packets would have nowhere to go on my network.  Also, an OVS bridge must exist for Neutron to do its job.   The “BRIDGE_MAPPINGS” and “BRIDGE_IFACES” configuration lines take care of this automatically for you.

Installing OpenStack

Ok, now that packstack is installed and we have an answer file, let’s start the installation:

packstack --answer-file=/root/answers.txt

This could take some time to complete, depending on how fast your network and your access to the RPMs are.

Finalizing the configuration of OpenStack

You can first check that all openstack services are running:

openstack-service status

You should also see a KeystoneRC file in your root folder:  /root/keystonerc_admin

Source this file to get access to your OpenStack cloud:

source /root/keystonerc_admin

At this point, I usually create an account for myself with my own password.   *This is an optional step:

openstack user create --project admin --password <password> <username>
openstack role add --user <username> --project admin admin

This way, you can log into Horizon (the web portal) using your own credentials instead of the admin account’s automatically generated password.

You could also create a new KeystoneRC file with your own credentials for when you are on the command line.  Again, an optional step!

At least one image in Glance will be required before we can launch anything.  Let’s just upload a small Cirros image for now (ideal for testing as it’s tiny and fast!).

curl -o /root/cirros.img <cirros_image_url>
glance image-create --name='cirros' --container-format=bare --disk-format=qcow2 < /root/cirros.img

Finally, before you can launch instances (VMs), you must finalize your tenant network configuration and create at least one network for VMs to boot on.  There are two very common ways to deal with networks in OpenStack:  tenant networks vs provider networks.

When using tenant networks, your instances boot up on a virtual network (usually based on VXLAN) and need to go through a virtual OpenStack router to get access to the outside world.   To allow access from the outside to the internal virtual network, you create floating IPs on your virtual router.  This is the most classic way to deploy OpenStack.

When using provider networks, you can allow your instances to boot up directly on the external (or public) network.  No need for virtual networks, no need for floating IPs.    This is a much easier way to deal with networks and can provide better performance (no VXLAN encapsulation), but it obviously doesn’t scale as well from an IP allocation perspective, as each instance requires at least one public IP.

Configuring tenant networks

I first need to create an external (public) network where floating IPs will be assigned:

neutron net-create public --router:external True --provider:physical_network ex-net --provider:network_type flat --tenant-id services

neutron subnet-create public <public_cidr> --name public --allocation-pool start=<first_ip>,end=<last_ip> --disable-dhcp --gateway=<gateway_ip>

A few things to note here:  

  • This will only work if I have allowed the “flat” network driver in my answers.txt file.  If I haven’t done that, I will need to update Neutron manually to allow the “flat” driver and restart Neutron:  vi /etc/neutron/plugin.ini;  openstack-service restart neutron
  • “ex-net” is the name of the physical network (physnet) I used in my answer file.  It is important that you keep the same name.  OpenStack doesn’t attach to a physical interface name like “eth0”; instead we use an alias called a “physnet”.   This way, multiple servers in our stack can have different physical interface names but always use the same alias.
  • I have created my public network under the “services” tenant, but all tenants will see this network.
  • Obviously, change the public subnet configuration to what makes sense for you.   Floating IPs will be allocated from the allocation-pool range defined by this subnet.

Now, in a tenant network configuration, I need to create at least one router and a private (VXLAN-based) network for my instances to boot on:

neutron router-create router
neutron router-gateway-set router public
neutron net-create private
neutron subnet-create private <private_cidr> --name private --gateway <gateway_ip> --enable-dhcp --dns-nameserver <dns_ip>
neutron router-interface-add router private

We should be all good now, let’s boot our first instance on our private network:

nova boot --flavor m1.tiny --image cirros cirros01

Log into Horizon; you should see your cirros instance and have access to its console.   Update your security groups to allow SSH and assign a floating IP if you’d like to connect to it remotely.

Configuring provider networks

Let’s now see how we can avoid using floating IPs, routers and virtual networks.    In many cases, it makes sense to just boot up OpenStack instances on an existing physical network (provider network).

To do that, you first need to roll back all the configuration you did to create tenant networks in the previous step.   You could also have a mix of tenant and provider networks if you have other network interfaces configured on your OpenStack servers.  That said, the following instructions will assume that you rolled back all previous configuration and are using the same “eth0 => ex-net => br-ex” configuration.

When an instance is booting up, it reaches out to the metadata service to get configuration details.   As an example, if you launch this curl request from an OpenStack instance, you will get your hostname back:

# curl http://169.254.169.254/latest/meta-data/hostname

What you need to understand before moving to provider networks is how your instances get network access to the 169.254.169.254 IP address (the metadata service).  By default, the virtual router (your instance’s default gateway) has a route to this metadata service IP.   But when using provider networks, you don’t have this virtual router anymore.   For this reason, you need to change your DHCP agent configuration so that a route is injected into your instances during the DHCP process; your dnsmasq DHCP server will then also route your instances’ metadata HTTP requests to your metadata server.

# Change the following settings in dhcp_agent.ini
vi /etc/neutron/dhcp_agent.ini
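The exact options were stripped from this post. Assuming the stock Neutron DHCP agent, the standard way to get metadata working without a router is to enable isolated metadata, which injects the 169.254.169.254 route via DHCP and proxies metadata requests from the DHCP namespace:

```ini
# /etc/neutron/dhcp_agent.ini (excerpt)
[DEFAULT]
enable_isolated_metadata = True
```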


# Restart Neutron
openstack-service restart neutron

Now, all that is missing is to create a new public network (provider network) under your tenant, allowing it to boot instances directly on this network.

# Get your tenant (project) ID 
openstack project list

# Create a new network under this tenant ID
neutron net-create public --provider:physical_network ex-net --provider:network_type flat --tenant-id e4455fcc1d82475b8a3a13f656ac701f

# Create a subnet for this network
neutron subnet-create public <public_cidr> --name public --allocation-pool start=<first_ip>,end=<last_ip> --gateway=<gateway_ip>

Now, boot an instance:

nova boot --flavor m1.tiny --image cirros cirros01





Heat template to load-balance a web stack

Here is a heat template to create a web stack.  This template will create a resource group of 3 web servers (you must have a web server image; mine is named webv2), a load-balancing pool with an internal VIP, and finally a floating IP pointing to this VIP for external access.

* Note that this heat template assumes that you only have one private network. You will have to specify which subnet you want to use if you have more than one. You should create your public network under the “services” project as an external network; this way, your internal network will be the default one if you only have one.

heat_template_version: 2015-04-30

description: |
  Heat template provisioning a stack of web servers,
  a load-balancer and a floating IP mapped to the
  load-balancing VIP.

parameters:
  image:
    type: string
    label: image
    description: Image name
    default: webv2
  flavor:
    type: string
    label: flavor
    description: Flavor name
    default: m1.tiny

resources:
  web_nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          security_groups:
            - default
            - web
  pool:
    type: OS::Neutron::Pool
    properties:
      name: mypool1
      protocol: HTTP
      lb_method: ROUND_ROBIN
      subnet: web
      vip: {"protocol_port": 80}
  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      members: { get_attr: [web_nodes, refs] }
      pool_id: { get_resource: pool }
      protocol_port: 80
  floatingip:
    type: OS::Neutron::FloatingIP
    properties:
      fixed_ip_address: { get_attr: [pool, vip, address] }
      floating_network: public
      port_id: { get_attr: [pool, vip, port_id] }

outputs:
  public_vip:
    description: Service public VIP
    value: { get_attr: [floatingip, floating_ip_address] }
  internal_vip:
    description: Internal VIP
    value: { get_attr: [pool, vip, address] }

Adding physical network connectivity (provider network)

As an example, let’s assume that you want to run your web servers in OpenStack to leverage LBaaS and autoscaling, but that for some reason, you need these web servers to have access to virtual machines or physical servers running on a physical network.  Here are some instructions to do that.

Provider network

In this example, the public network is the network from which my users will access my web application.   A floating IP will be mapped to a load-balancing pool VIP to distribute the load between my web servers.  The private network is a VXLAN virtual network inside OpenStack Neutron.  But the prov1 network is another physical network where my database server(s) reside.  Another use case could be getting access to another data center or a remote office from my internal OpenStack virtual network.

Step 1 – Create a new bridge on your controller node(s) and add a physical interface to this bridge.

I created a new bridge called br-prov1:

# cat /etc/sysconfig/network-scripts/ifcfg-br-prov1 
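The file contents were stripped from this post; a typical OVS bridge definition on RHEL 7 looks like this (values assumed):

```ini
# /etc/sysconfig/network-scripts/ifcfg-br-prov1
DEVICE=br-prov1
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=none
```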

And attached my physical interface (eth1) to this new bridge:

# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
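Again, the contents were stripped; a matching OVS port definition (values assumed) would be:

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-prov1
ONBOOT=yes
BOOTPROTO=none
```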

Step 2 – Create a new physnet device in OVS

Update “bridge_mappings” setting in the following file with your new physnet:
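The file path and value were stripped from this post. On a Mitaka ML2/OVS setup the mapping usually lives in the OVS agent config (path assumed); the new physnet is appended to the existing mapping:

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini (excerpt)
[ovs]
bridge_mappings = ex-net:br-ex,prov1:br-prov1
```

Restart Neutron afterwards (openstack-service restart neutron) so the agent picks up the new mapping.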


Step 3 – Finally, create a new flat network (flat driver must be enabled in /etc/neutron/plugin.ini)

In this example, I am creating a shared flat network under my services project.  This way, all projects will have access to this physical network.  That said, you could also create this network under a specific project.


neutron net-create prov1 --shared --provider:network_type flat --provider:physical_network <physnet_name> --tenant-id <services_project_id>

Once this is completed, don’t forget to set up your subnet (providing all subnet configuration details) and attach it to your router as per the diagram at the top of this article.

Step 4 – Routing back

Don’t forget to add a route back from your backend provider network.  In my example, all traffic is routed by my virtual router through <router_prov1_ip>.   So my backend servers needed the following route back: <private_cidr> => <router_prov1_ip>


* Please note that following these instructions will not allow you to boot OpenStack instances on this provider network.  Additional steps are required to do so:  all compute nodes would require access to this br-prov1 bridge, metadata server access would be required from this network to receive cloud-init info, and IP management would no longer be handled by the OpenStack DHCP service.  I’ll try to create another article when possible to provide more details about this.


ksm process CPU issue on compute nodes

ksmd allows you to oversubscribe the memory of your compute nodes by sharing identical memory pages between the instances running on a compute node.

A CPU tax is to be expected for this process to do its job.  That said, I have run into an issue where the CPU tax was over 50%, which is obviously not acceptable.

Here is how to disable ksmd:

echo "KSM_ENABLED=0" > /etc/default/qemu-kvm

Unfortunately, this means you will no longer share memory pages between instances, using more memory on each node.

ksmd can also be fine-tuned in the following configuration file:
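The file name was stripped from this post; on RHEL 7 this is /etc/ksmtuned.conf (assumed), where the defaults look like this:

```ini
# /etc/ksmtuned.conf (excerpt; default values)
KSM_MONITOR_INTERVAL=60
KSM_SLEEP_MSEC=10
KSM_NPAGES_MIN=64
KSM_NPAGES_MAX=1250
KSM_THRES_COEF=20
```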


But finding the right parameters for your specific configuration can be a time consuming task.

More information can be found here:

Nested virtualization in Openstack

I personally test all kinds of Openstack setups and needed Openstack to run on Openstack (nested virtualization).

Assuming that you have Intel CPUs, you need the vmx cpu flag to be enabled inside your instances.

On your Openstack compute node, enable nested virtualization at the kernel level:

echo "options kvm-intel nested=y" >> /etc/modprobe.d/dist.conf

I believe the following step might be optional in some cases, but I also modify my nova.conf file with the following settings:
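The settings were stripped from this post; the usual libvirt options for this (assumed here) are:

```ini
# /etc/nova/nova.conf (excerpt)
[libvirt]
virt_type = kvm
cpu_mode = host-passthrough
```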


* Note that enabling “host-passthrough” will configure your instances’ CPUs with the exact same model as your hardware CPU. That said, if you have multiple nodes with different CPU models, it will no longer be possible to live-migrate instances between them.

Reboot your compute node.

Validate that nested virtualization is enabled at the kernel level:

# cat /sys/module/kvm_intel/parameters/nested

Validate that the virsh capabilities output now supports the “vmx” feature:

# virsh  capabilities

Launch an instance on this node, and validate that your instance has the vmx CPU flag enabled:

# cat /proc/cpuinfo  |grep vmx

You should now be able to install a new hypervisor inside your instances and support nested virtualization.

Affinity or Anti-Affinity groups

Here is how to create a server group with an “affinity” or “anti-affinity” policy:

First, add the appropriate filters to your nova configuration file (/etc/nova/nova.conf): ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter.
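The configuration line was stripped from this post; assuming the default scheduler filter list of that era, the result would look like:

```ini
# /etc/nova/nova.conf (excerpt; filter list assumed)
[DEFAULT]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```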


Restart nova on your controller node:

openstack-service restart nova

Create a server group with an “affinity” or “anti-affinity” policy. Let’s assume here that I am creating an anti-affinity policy for my database cluster:

nova server-group-create db-cluster1 anti-affinity

*The affinity policy would be exactly the same command, but with “affinity” instead of “anti-affinity” as the second argument.

Find the ID of your new server group:

nova server-group-list

Boot your new database cluster instances with this server group policy:

nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db01
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db02
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db03

All these database servers will automatically be scheduled by the nova scheduler to run on different nodes.

Dedicating compute hosts by tenants

Host aggregates allow you to group hosts for different purposes.  In this scenario, I wanted to dedicate some compute hosts to one of my tenants, making sure that no other tenant can provision to these hosts.

First, create a host aggregate with a few hosts:

nova aggregate-create reserved_hosts nova
nova aggregate-add-host reserved_hosts <hostname1>
nova aggregate-add-host reserved_hosts <hostname2>

Then, set a “filter_tenant_id” metadata tag on this aggregate with the ID of your tenant:

nova aggregate-set-metadata reserved_hosts filter_tenant_id=630fcdd12af7447198afa7a210b5e25f

Finally, change your nova scheduler configuration in /etc/nova/nova.conf to include a new filter named “AggregateMultiTenancyIsolation”. This filter will only allow instances from the specified tenant_id to be provisioned on these aggregate hosts. No other tenant will be allowed on these hosts.
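The configuration line was stripped from this post; appending the filter to an assumed default list would look like:

```ini
# /etc/nova/nova.conf (excerpt; filter list assumed)
[DEFAULT]
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateMultiTenancyIsolation
```

Restart nova afterwards (openstack-service restart nova) so the scheduler picks up the new filter.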


You can find the full list of nova scheduler filters for Juno here:

Multiple cinder backends

In this scenario, I wanted to have two cinder backends, both on NFS. My first NFS share is backed by HDDs, my second one by SSDs. These could be named “fast” and “slow” or anything else.

First, add your two backends in /etc/cinder/cinder.conf


Then, at the end of your cinder.conf file, add the configuration details for each backend. Each backend is defined under its name in brackets, [backend_name]. Obviously, you could use a driver other than NFS if you have another type of storage.
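The configuration was stripped from this post; assuming the stock cinder NFS driver, the two backends would be defined like this:

```ini
# /etc/cinder/cinder.conf (excerpt; values assumed)
[DEFAULT]
enabled_backends = ssd,hdd

[ssd]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares_ssd.conf
volume_backend_name = ssd

[hdd]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares_hdd.conf
volume_backend_name = hdd
```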



Restart your cinder services:

openstack-service restart cinder

Create your NFS share location file for each offering:

# vi /etc/cinder/nfs_shares_ssd.conf

# vi /etc/cinder/nfs_shares_hdd.conf
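The share lists were stripped from this post; each file simply contains one NFS export per line (server names hypothetical):

```ini
# /etc/cinder/nfs_shares_ssd.conf
nfs-ssd.example.com:/exports/cinder_ssd

# /etc/cinder/nfs_shares_hdd.conf
nfs-hdd.example.com:/exports/cinder_hdd
```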

Finally, create your new cinder types and set the backend name for each type as defined in your cinder.conf file:

cinder type-create hdd
cinder type-key hdd set volume_backend_name=hdd
cinder type-create ssd
cinder type-key ssd set volume_backend_name=ssd

You should now be able to create cinder volumes for both types of offering. You can review your configuration by listing the specs:

cinder extra-specs-list

Logs are in /var/log/cinder/*.log if you need to troubleshoot any issue.