Tags: howto,cloudstack,kvm,iscsi,fcoe
Date: 20250514
Objective:
Consume shared block storage with CloudStack and KVM.
Introduction:
By far the most common storage for CloudStack has been NFS - it's very stable and, used properly, also very fast; it remains the #1 recommendation. But if what you have is shared block storage - an iSCSI or FCoE LUN exported to all your hypervisors - NFS is not an option, and that basically leaves you with two choices: a cluster filesystem like OCFS2, or CLVM (clustered LVM). Alas, the CLVM implementation used by CloudStack is quite old, so really there's only one option: a clustered filesystem.
Assumptions:
- your virtualisation platform of choice is CloudStack with KVM
Recommendations (for now):
- use OCFS2
Limitations and warnings:
- Oracle has capped an OCFS2 cluster at a maximum of 32 nodes; expect performance to take a hit beyond that point.
Moving on, I'm going to assume you have followed the advice and installed Oracle Linux 9 - the commands might work just as well on other EL clones, but that is not guaranteed.
I'm not going to cover exporting an iSCSI volume - that will differ from case to case and from one appliance to another.
I'm also not - at least at this time - going to cover setting up multipathing; you'll have to do that by yourself.
Howto:
We'll start with 2 CloudStack KVM hypervisors - let's call them hostA (192.168.122.10) and hostB (192.168.122.11) - although in reality you will probably use proper hostnames for them. Unless noted otherwise, run the following steps on BOTH hypervisors:
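One thing worth double-checking before you start: o2cb expects each node's registered name to match that machine's actual hostname, so hostA and hostB here stand in for the real (short) hostnames of the two hypervisors. If those names don't resolve via DNS, it doesn't hurt to add /etc/hosts entries on both machines - a minimal sketch using this example's names and addresses:
# append to /etc/hosts on BOTH hypervisors (adjust names and IPs to your environment)
cat >> /etc/hosts <<'EOF'
192.168.122.10 hostA
192.168.122.11 hostB
EOF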
# disable the firewalld, we won't need it where we're going
systemctl stop firewalld
systemctl mask firewalld
# install the software
dnf -y install ocfs2-tools
# add and verify our cluster
o2cb add-cluster cloudstack
o2cb list-cluster cloudstack
# add the 2 hypervisors (nodes) to the cluster
o2cb add-node cloudstack hostA --ip 192.168.122.10
o2cb add-node cloudstack hostB --ip 192.168.122.11
# verify the cluster again, making sure it's displaying correct info
o2cb list-cluster cloudstack
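# for reference, the o2cb commands above just write /etc/ocfs2/cluster.conf; on both nodes
# it should now look roughly like the following (key order and indentation may vary a bit
# between o2cb versions):
#   cluster:
#       heartbeat_mode = local
#       node_count = 2
#       name = cloudstack
#   node:
#       number = 0
#       cluster = cloudstack
#       ip_port = 7777
#       ip_address = 192.168.122.10
#       name = hostA
#   node:
#       number = 1
#       cluster = cloudstack
#       ip_port = 7777
#       ip_address = 192.168.122.11
#       name = hostB
cat /etc/ocfs2/cluster.conf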
# initialise the cluster, making sure to answer the questions correctly,
# especially the name of the cluster to start on boot - it should be "cloudstack" in our case
/sbin/o2cb.init configure
# check the state of the cluster
/sbin/o2cb.init status
# enable the required services
systemctl enable --now o2cb
systemctl enable --now ocfs2
# tell the system to reboot in 30 seconds should a kernel panic occur and to also trigger a panic in case of an "oops" (a type of non-fatal error)
# why? because letting the host hang without rebooting could lead to OCFS2 instability, filesystem corruption, split-brain etc. - you really want a quick, clean reset
printf 'kernel.panic = 30\nkernel.panic_on_oops = 1\n' | sudo tee -a /etc/sysctl.d/99-sysctl.conf ; sudo sysctl --system
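# (optional) quick check that the new values are active
sysctl kernel.panic kernel.panic_on_oops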
Now on ONLY a single node create the filesystem. I'll assume your iSCSI or FCoE device is present as /dev/sdb.
# instruct mkfs to create a filesystem adequate for storing VM files
# I'm assuming you'll want the maximum number of nodes (32) to be able to use this filesystem; specify it here, because you can't easily grow this later
# "-T vmstore" basically tells the filesystem to use a block size of 4k and a cluster size of 128k
# it would be a good idea to take some time and research best values for your workload here, don't blindly copy/paste from the Internet
mkfs.ocfs2 -T vmstore -N 32 -L cloudstack /dev/sdb
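# optional sanity check - mounted.ocfs2 ships with ocfs2-tools; run it on either node and
# the new filesystem should show up with the "cloudstack" label
mounted.ocfs2 -d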
Now again on BOTH hypervisors make sure things are still working fine and you can mount the filesystem:
# set it up in /etc/fstab so it gets mounted at boot
mkdir /mnt/ocfs2-cloudstack
echo '/dev/sdb /mnt/ocfs2-cloudstack ocfs2 _netdev,defaults 0 0' | sudo tee -a /etc/fstab
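# alternatively - and arguably safer, since device names like /dev/sdb can change between
# boots - you could mount by the filesystem label set with mkfs.ocfs2 above (use one fstab
# entry or the other, not both):
# echo 'LABEL=cloudstack /mnt/ocfs2-cloudstack ocfs2 _netdev,defaults 0 0' | sudo tee -a /etc/fstab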
# verify it actually can be mounted and do that now
systemctl daemon-reload
mount -a
df -h /mnt/ocfs2-cloudstack
/sbin/o2cb.init status
# on both hypervisors create files in the new mount point and make sure you can read them from the other node
echo "test from `hostname -f`" > /mnt/ocfs2-cloudstack/`hostname -s`
# optional, but recommended, reboot both nodes when you are done, make sure after reboot OCFS2 starts properly and the mount point is usable
reboot
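# once both nodes are back up, verify on each of them that the cluster and the mount came back on their own
/sbin/o2cb.init status
df -h /mnt/ocfs2-cloudstack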