ksm process CPU issue on compute nodes

ksmd allows you to oversubscribe memory on your compute nodes by sharing identical memory pages between the instances running on a node.

A CPU tax is to be expected for this process to do its job. That said, I have been running into an issue where the CPU tax was over 50%, which is obviously not acceptable.

Here is how to disable ksmd:

echo "KSM_ENABLED=0" > /etc/default/qemu-kvm

Unfortunately, this means that you will no longer be sharing memory pages between instances, so each node will use more memory.
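If you want ksmd to stop right away without a reboot, the kernel also exposes a runtime switch. A sketch, assuming the standard sysfs KSM interface (run as root):

```shell
# 0 = stop ksmd, 1 = run, 2 = stop and unmerge all shared pages
echo 2 > /sys/kernel/mm/ksm/run

# confirm it is stopped (should print 2)
cat /sys/kernel/mm/ksm/run
```

Writing 2 instead of 0 also breaks the existing shared pages apart, so each instance immediately gets its own private copies.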

ksmd can also be fine-tuned in the following configuration file:


But finding the right parameters for your specific configuration can be a time-consuming task.
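For reference, the two knobs that drive most of the CPU cost are exposed by the kernel under /sys/kernel/mm/ksm (on RHEL-based systems, the ksmtuned daemon manages them based on /etc/ksmtuned.conf). A sketch of how to inspect them:

```shell
# how long ksmd sleeps between scans (higher = less CPU)
cat /sys/kernel/mm/ksm/sleep_millisecs

# how many pages ksmd scans each time it wakes up (lower = less CPU)
cat /sys/kernel/mm/ksm/pages_to_scan
```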

Nested virtualization in Openstack

I personally test all kinds of Openstack setups and needed to run Openstack on Openstack (nested virtualization).

Assuming that you have Intel CPUs, you need the vmx cpu flag to be enabled inside your instances.

On your Openstack compute node, enable nested virtualization at the kernel level:

echo "options kvm-intel nested=y" >> /etc/modprobe.d/dist.conf

I believe the following step might be optional in some cases, but I also modify my nova.conf file with the following settings:
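The exact settings did not survive in this post. On a Juno-era deployment the CPU mode lives in the [libvirt] section of nova.conf, so a plausible fragment (an assumption based on the host-passthrough note below) would be:

```ini
[libvirt]
virt_type = kvm
cpu_mode = host-passthrough
```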


* Note that enabling “host-passthrough” will configure your instances' CPUs with the exact same model as your hardware CPU. That said, if you have multiple nodes with different CPU models, it will no longer be possible to live-migrate instances between them.

Reboot your compute node.

Validate that nested virtualization is enabled at the kernel level (this should return Y):

# cat /sys/module/kvm_intel/parameters/nested

Validate that virsh capabilities now reports the “vmx” feature:

# virsh capabilities | grep vmx

Launch an instance on this node, and validate that your instance has the vmx cpu flag enabled:

# grep vmx /proc/cpuinfo

You should now be able to install a new hypervisor inside your instances and get nested virtualization support.

Affinity or Anti-Affinity groups

Here is how to create a server group with an “affinity” or “anti-affinity” policy:

First, add the appropriate filters to your nova configuration file (/etc/nova/nova.conf): ServerGroupAffinityFilter and ServerGroupAntiAffinityFilter.
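The snippet itself is missing above. Assuming the Juno default filter list, the line in /etc/nova/nova.conf would look something like this (keep your existing filters and make sure the two ServerGroup filters are present):

```ini
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
```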


Restart nova on your controller node:

openstack-service restart nova

Create a server group with an “affinity” or “anti-affinity” policy. Let’s assume here that I am creating an anti-affinity policy for my database cluster:

nova server-group-create db-cluster1 anti-affinity

*The affinity policy would be exactly the same command, but with “affinity” instead of “anti-affinity” as the second argument.

Find the ID of your new server group:

nova server-group-list

Boot your new database cluster instances with this server group policy (the --hint group= value is the server group ID from the previous command):

nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db01
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db02
nova boot --image rhel7 --hint group=1dc16555-872d-4cda-bdf8-69b2816820ae --flavor a1.large --nic net-id=9b97a367-cd0d-4d30-a395-d10794b1a383 db03

All these database servers will automatically be scheduled by the nova scheduler to run on different compute nodes.

Dedicating compute hosts to a tenant

Host aggregates allow you to group hosts for different purposes. In this scenario, I wanted to dedicate some compute hosts to one of my tenants, making sure that no other tenant could provision instances on these hosts.

First, create a host aggregate with a few hosts:

nova aggregate-create reserved_hosts nova
nova aggregate-add-host reserved_hosts host01.lab.marcoberube.com
nova aggregate-add-host reserved_hosts host02.lab.marcoberube.com

Then, set a “filter_tenant_id” metadata tag on this aggregate with the ID of your tenant:

nova aggregate-set-metadata reserved_hosts filter_tenant_id=630fcdd12af7447198afa7a210b5e25f

Finally, change your nova scheduler configuration in /etc/nova/nova.conf to include a new filter named “AggregateMultiTenancyIsolation”. This filter only allows instances from the specified tenant_id to be provisioned on hosts in this aggregate; no other tenant will be allowed on these hosts.
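The configuration snippet is not shown above. A sketch, assuming the same scheduler_default_filters option with AggregateMultiTenancyIsolation appended to whatever filters you already use:

```ini
[DEFAULT]
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,AggregateMultiTenancyIsolation
```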


You can find the full list of nova scheduler filters for Juno in the Openstack scheduler filters documentation.

Openstack instance resize

Short version, for a demo environment:

Change the following values in nova.conf:
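The values are missing from this post. For a single-node demo, the usual change (an assumption based on common practice for allowing a resize to land on the same host) is:

```ini
[DEFAULT]
allow_resize_to_same_host = True
allow_migrate_to_same_host = True
```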


Then, restart nova: openstack-service restart nova

Long version, for a multi-compute-node environment:

Assign a bash shell to the nova user on each host:

usermod -s /bin/bash nova

Allow passwordless SSH between all your hosts for the “nova” user:

cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
EOF

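For this to work, the nova user on each host also needs an SSH key pair, with each host's public key trusted by the others. A sketch, assuming RSA keys and the hostnames used earlier in this post:

```shell
# run as the nova user on each compute host
su - nova
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# append this host's public key to the nova user's
# ~/.ssh/authorized_keys on every other host, e.g.:
ssh-copy-id nova@host02.lab.marcoberube.com
```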
When your migration fails, you can reset the state of an instance using the following command:

nova reset-state --active <instance>
