Thursday, November 08, 2012

RHEL 6 Clustering, VM Fencing

I recently retasked one of my lab machines as a RedHat virtualization server, which RedHat brands RHEV but which is really just KVM.  One of this machine's tasks is to support a test cluster of VMs.  Under normal circumstances, clustering would require a remote management interface such as an iLO, a DRAC, or an RMM.

As usual, I was disappointed with how difficult this was.  To make matters more difficult for you, I won't be covering clustering itself in this article.  This document's scope is limited to setting up RHEV VM fencing.

On the host machine, we need to install the fence daemon.  Considering this is very lightweight, I'm going to do a shotgun install:
yum install fence-virtd-*
On my machine, this loaded four packages: the daemon, the interface between the daemon and the hypervisor, and two "plugins".  (The serial plugin is probably not needed.)

The base RPM will provide the /etc/fence_virt.conf file.  Modify it to look like this:
listeners {
  multicast {
    family = "ipv4";
    interface = "virbr0";
    address = "";
    port = "1229";
    key_file = "/etc/cluster/fence_xvm.key";
  }
}

fence_virtd {
  module_path = "/usr/lib64/fence-virt";
  backend = "libvirt";
  listener = "multicast";
}

backends {
  libvirt {
    uri = "qemu:///system";
  }
}
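For a scripted build, the same file can be dropped in place with a heredoc.  This is just a sketch that reproduces the config above; the package also ships an interactive configurator, fence_virtd -c, if you'd rather answer prompts:

```shell
# Write /etc/fence_virt.conf non-interactively (mirrors the config shown above)
cat > /etc/fence_virt.conf <<'EOF'
listeners {
  multicast {
    family = "ipv4";
    interface = "virbr0";
    address = "";
    port = "1229";
    key_file = "/etc/cluster/fence_xvm.key";
  }
}

fence_virtd {
  module_path = "/usr/lib64/fence-virt";
  backend = "libvirt";
  listener = "multicast";
}

backends {
  libvirt {
    uri = "qemu:///system";
  }
}
EOF
```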
Two things to notice about the config file.  First, the key_file option is little more than a password in a text file, one that will have to be duplicated on every VM in the cluster.  The "theory" is that only a device holding the key will be able to fence other nodes.  That brings us to the second point: the multicast listener.  When a cluster node issues a fence command, the symmetric authentication key is multicast on the network in the clear.  Thus, the reality is that the key_file provides no real security.

Which brings us to a second issue with multicast.  Per RedHat, cross-host fencing is not supported.  As such, all cluster nodes have to live on the same physical machine, rendering real-world VM clustering pretty much worthless.  Here's the reality of cross-host fencing: it is unsupported because of the security concern of multicasting the clear-text fencing key and because RedHat cannot guarantee the multicast configuration of the switch infrastructure.  Given properly configured switches, plus a dedicated NIC and virtual bridge in each host, cross-host fencing works.  In this lab configuration, however, none of that is a concern.
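The config references /etc/cluster/fence_xvm.key.  A common way to create it is to pull a block of random data; this is a sketch, and the 512-byte size is an arbitrary choice, not a requirement:

```shell
# Generate a random shared key for fence_virtd/fence_xvm
mkdir -p /etc/cluster
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=512 count=1
chmod 600 /etc/cluster/fence_xvm.key   # keep the "password" out of casual view
```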

After creating a key_file, open the fence daemon's port in iptables:
-A INPUT -s -m tcp -p tcp --dport 1229 -j ACCEPT
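The -s argument above lost its value somewhere along the way.  Assuming libvirt's default NAT network of 192.168.122.0/24 (a hypothetical value; substitute the subnet of your own virtual bridge), the complete rule would look something like:

```
# Hypothetical source subnet -- replace with your virtual bridge's network
-A INPUT -s 192.168.122.0/24 -m tcp -p tcp --dport 1229 -j ACCEPT
```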
Copy the key_file to each clustered VM (they don't need the config file) and add the opposite iptables rule:
-A OUTPUT -d -m tcp -p tcp --dport 1229 -j ACCEPT
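The -d argument is likewise missing its value.  Assuming the host sits at libvirt's default bridge address of 192.168.122.1 (again hypothetical; substitute your host's address on the virtual bridge), the rule on each VM might read:

```
# Hypothetical destination: the host's address on the virtual bridge
-A OUTPUT -d 192.168.122.1/32 -m tcp -p tcp --dport 1229 -j ACCEPT
```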
On the host, chkconfig fence_virtd on and start it.  Running netstat should show the host listening on port 1229.  What it is listening "for" is the name of a VM to "destroy" (power off).  This means the cluster node names and the VM names recognized by KVM/QEMU have to match.  On the host, display the status of the VMs using:
watch virsh list
Given a two node cluster, on node1 issue:
fence_node node2
On the host, the status of node2 should change from running to inactive and, a moment later, back to running.  For testing purposes, the fence_node command can be installed on the host without the host being part of the cluster.  If you try this with yum, you'll get the entire cluster suite; instead, force-install these three RPMs:
rpm -ivh clusterlib-*.x86_64.rpm  --nodeps
rpm -ivh corosynclib-*.x86_64.rpm  --nodeps
rpm -ivh cman-*.x86_64.rpm  --nodeps

Truthfully, the better choice is to build a VM to manage the cluster using Luci.
