
Entries tagged "kvm".

Cloudmin GPL for Linux KVM

Great news! The good people behind the Webmin project have released a free (as in beer and speech) version of Cloudmin with support for the Linux KVM hypervisor.
This is so cool. Can't wait to try it out!
http://webmin.com/cloudmin.html
http://webmin.com/cinstall-kvm.html

Virtio drivers for FreeBSD

KVM is quickly becoming the de facto standard in virtualisation (dodge this, VMware!), and today, to make the experience even better, I stumbled upon _binary_ virtio drivers for FreeBSD.

http://people.freebsd.org/~kuriyama/virtio/
(copied here, just in case)

Thankfully virtio drivers have been available for FreeBSD for a while now, but you have to compile them yourself, as they are not yet included in the base operating system.
This is a good example of how to do it, by the way: http://viktorpetersson.com/2011/10/20/how-to-use-virtio-on-freebsd-8-2/
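
Once the modules are compiled (or the binary ones above are dropped into /boot/modules), loading them at boot comes down to a few lines in loader.conf. A minimal sketch, assuming the standard FreeBSD virtio module names:

# /boot/loader.conf - load the virtio modules at boot
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"        # disks then appear as vtbd0 etc., so adjust /etc/fstab
if_vtnet_load="YES"          # virtio network interface (vtnet0)
virtio_balloon_load="YES"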

I'm looking forward to an official FreeBSD release with virtio drivers in the standard base OS.

GlusterFS 3.4 hits Alpha

People interested in distributed filesystems will be glad to hear GlusterFS has reached v3.4 Alpha.
This new version brings a lot of new and really cool stuff to the table:
    WORM (write once read many)
    Operating version for glusterd
    Block device translator
    Duplicate Request Cache
    Server Quorum
    libgfapi
    VM image storage improvements (performance work, separate from the QEMU integration)
    NFSv3 ACL support
The new QEMU integration should massively increase performance when used as backing storage for KVM virtual machines. Really nice!
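
To give an idea of what the libgfapi integration looks like in practice: QEMU (1.3 and newer) can talk to a Gluster volume directly over a gluster:// URI, bypassing FUSE entirely. A minimal sketch, where server1, myvol and vmdisk.qcow2 are placeholder names:

# create an image directly on the Gluster volume, then boot a guest from it
qemu-img create -f qcow2 gluster://server1/myvol/vmdisk.qcow2 20G
qemu-system-x86_64 -enable-kvm -m 2048 \
    -drive file=gluster://server1/myvol/vmdisk.qcow2,if=virtio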

More info on the project's blog: http://www.gluster.org/2013/02/new-release-glusterfs-3-4alpha/

Cloudstack 4.1.0

A couple of days ago the Apache Foundation released Cloudstack 4.1.0, which brings a lot of interesting new features:
    An API discovery service that allows an end point to list its supported APIs and their details (see the example after this list).
    An events framework that provides an "event bus" with publish, subscribe, and unsubscribe semantics, including a RabbitMQ plugin that can interact with AMQP servers; it also introduces the notion of a state change event.
    L3 router functionality in the Nicira NVP plugin, including support for KVM (previously Xen-only).
    API request throttling to prevent attacks via frequent API requests.
    AWS-style regions.
    Egress firewall rules for guest networks.
    Resizing root and data volumes.
    Reset SSH key to access VMs.
    Support for the EC2 Query API.
    Autoscaling support in conjunction with load balancing devices such as NetScaler.
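
For instance, the discovery service exposes a listApis command. Assuming the management server's unauthenticated integration port has been enabled (Global Settings -> integration.api.port, commonly set to 8096; suitable for testing only), querying it can be as simple as:

# mgmt-server and port 8096 are assumptions for this sketch
curl 'http://mgmt-server:8096/client/api?command=listApis&name=deployVirtualMachine&response=json'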

Looking forward to testing it.
Download from here: http://cloudstack.apt-get.eu/rhel/4.1/
The original announcement here:
https://blogs.apache.org/cloudstack/entry/apache_cloudstack_4_1_0

PS: one can use this for a simple deployment: https://github.com/penguin2716/autoinstall_cloudstack/blob/master/README.org.

Cloudstack 4.2.0 is out!

The Apache Foundation has announced version 4.2 of the Cloudstack cloud platform!
There are loads of interesting new features; check them out:

http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Release_Notes

Taking KVM volume snapshots with Cloudstack 4.2 on CentOS 6.5

Apache Cloudstack cannot currently take KVM VM snapshots, but it can handle ROOT and DATA volume snapshots using qemu-img. This functionality can be enabled in Global Settings -> "kvm.snapshot.enabled".
This feature worked fine on previous versions of CentOS (6.0-6.4); however, starting with 6.5, qemu-img no longer recognises the "-s" parameter that Cloudstack uses to take the volume snapshots.

This problem can be worked around in several ways, for example by downgrading qemu-img to the 6.4 version, but that may not appeal to those who like to stay up to date.

A more elegant workaround I've discovered since getting my hands dirty with ACS: the script[1] responsible for taking the snapshot first looks for a "cloud-qemu-img" binary in the $PATH; if it can't find one, it falls back on whatever `which qemu-img` returns. So the solution is as simple as installing the old qemu-img as cloud-qemu-img, which can be done like this:

# fetch the CentOS 6.4 build of qemu-img from the vault
mkdir cloud-qemu-img
cd cloud-qemu-img
wget http://vault.centos.org/6.4/updates/x86_64/Packages/qemu-img-0.12.1.2-2.355.el6_4_4.1.x86_64.rpm
# unpack the rpm without installing it, then drop the binary in place under the new name
rpm2cpio qemu-img-0.12.1.2-2.355.el6_4_4.1.x86_64.rpm | cpio -idmv
cp ./usr/bin/qemu-img /usr/bin/cloud-qemu-img
Voilà! This is probably the best solution because it neither modifies the Cloudstack script nor interferes with the stock qemu packages.
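
A quick sanity check (illustrative; grepping the help output is just one way to spot the flag) that the old binary is the one the script will find and that it still understands "-s":

which cloud-qemu-img                  # should print /usr/bin/cloud-qemu-img
cloud-qemu-img --help | grep -- '-s'  # the 6.4 build still lists the snapshot flag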

[1] - /usr/share/cloudstack-common/scripts/storage/qcow2/managesnapshot.sh

Nested virt - Xenserver on KVM

At openvm.eu we need to test templates on Xenserver and KVM; however, the base OS for the build environment is CentOS 7 (with KVM).
In order to test the templates on Xenserver we had to run this hypervisor as a KVM guest (gotta love virtualisation!). By default, though, Xenserver will complain that you can't run any HVM guests, only paravirtualised (PV) ones. This sucks because PV is used less and less, with HVM being in the spotlight.

Luckily, with KVM we can forward the VMX CPU flag to a guest and thus make it available to Xenserver for its HVM mode.

There are a few things to be aware of though:
1 - in libvirt, give the Xenserver VM a good CPU profile (I used Core2duo) and make sure the VMX flag is set to "require" (see the snippet after this list)
2 - the stock CentOS 7 kernel has a problem with nested virt at the moment, so use a newer kernel[1] (I'm using kernel-ml from elrepo-kernel)
3 - make sure the kvm_intel module is loaded with the option nested=1. For this to happen I reloaded the module/rebooted with this in /etc/modprobe.d/kvm-intel.conf:
options kvm-intel nested=1
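
For reference, the CPU section of the Xenserver guest's libvirt domain XML (virsh edit <domain>) ends up looking roughly like this; a sketch, as the exact model line depends on your host CPU:

<cpu mode='custom' match='exact'>
  <model fallback='forbid'>core2duo</model>
  <feature policy='require' name='vmx'/>
</cpu>

After a reboot you can verify nesting is on with cat /sys/module/kvm_intel/parameters/nested, which should return Y.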

Now enjoy Docker on CentOS, in Xenserver, on KVM, on CentOS. :-)


[1] - https://bugzilla.kernel.org/show_bug.cgi?id=45931 - this will likely be fixed in future CentOS/RH kernel updates, I hope

Protect KVM processes from OOM killer

While running clouds on Linux KVM hypervisors, it may happen that some of your virtual machine processes get killed by the OOM killer in order to free up memory.

Depending on your situation, the OOM killer may be instructed not to kill certain processes; but if you go this way make sure you know what you are doing and how resources are used.

So, to protect KVM processes from out-of-memory scenarios, we need to run a few commands:
1 - determine the PIDs of the processes; we can use pgrep for this
2 - protect them from the OOM killer by changing each PID's oom_adj value to -17 (OOM_DISABLE); if you are on a 3.x+ kernel then change oom_score_adj to -1000 instead, as oom_adj is deprecated

This can be wrapped up in a one-liner such as this:
for PID in $(pgrep qemu-kvm); do echo -17 > /proc/$PID/oom_adj; done

That would work on CentOS 6, but if you are on a newer kernel (say 3.x, like the one in CentOS 7) then use this:
for PID in $(pgrep qemu-kvm); do echo -1000 > /proc/$PID/oom_score_adj; done

You might want to double-check that your KVM processes run as qemu-kvm: that's the program's name in CentOS; it may differ in other distributions.

If you do not want to do this manually every time a VM is created, you can simply create a cron job to do it for you every X minutes; if you spin up instances very often then you may set it as frequently as every minute:
echo '*/1 * * * * root for PID in $(pgrep qemu-kvm); do echo -1000 > /proc/$PID/oom_score_adj; done' > /etc/cron.d/oomprotect

If you run into memory usage issues, do have a look at KSM as it can help optimise memory utilisation (but at the cost of extra CPU usage).
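
As a starting point, on a CentOS 7 KVM host KSM comes with the ksm and ksmtuned services; a quick way to check and enable it (sysfs paths as in mainline kernels):

cat /sys/kernel/mm/ksm/run            # 1 = KSM is running
cat /sys/kernel/mm/ksm/pages_sharing  # how many pages are currently shared
systemctl enable ksm ksmtuned         # CentOS 7 service names
systemctl start ksm ksmtuned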