Carve out free space as a partition on each system. Define the partition as a Physical Volume in the Volume Group "cloud". Share the partition via iSCSI (the tgtd service), and start the service on all nodes.
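A rough sketch of that per-node setup, assuming the new partition landed at /dev/sda4 and tgtd comes from scsi-target-utils (the device name, target IQN, and LUN number below are placeholders):
pvcreate /dev/sda4                # label the partition as an LVM Physical Volume
vgextend cloud /dev/sda4          # add it to the shared VG (vgcreate -cy cloud /dev/sda4 on the first node)
service tgtd start                # tgtd must be running before tgtadm can configure it
chkconfig tgtd on
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2009-02.net.example:`hostname -s`
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sda4
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
With each node exporting its slice of storage, execute the following script on each node: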
#!/bin/sh
#
# join nodes of a GFS cluster via iSCSI
#
if [ "$1" == "join" ]; then
ACT="login"
elif [ "$1" == "leave" ]; then
ACT="logout"
else
echo "Usage: iscsi-cloud join|leave"
echo "GPL 2009, dougbunger"
exit 99
fi
grep "clusternode.*name" /etc/cluster/cluster.conf > /tmp/iscsi-nodes
sed -i "s/nodeid=.[0-9]*//" /tmp/iscsi-nodes
sed -i "s/votes=.[0-9]*//" /tmp/iscsi-nodes
sed -i "s/sed -i "s/[ \"=<>\t]//g" /tmp/iscsi-nodes
ME=`hostname | cut -d\. -f1`
for J in `grep -v $ME /tmp/iscsi-nodes`; do echo "$ACT node $J:"
# discover
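# sendtargets output is "portal:port,tpgt target-iqn", e.g.
#   172.16.1.1:3260,1 iqn.2009-02.net.example:node2
# so the cuts below pull the IQN (field 2, space-split)
# and the portal (field 1, comma-split)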
L=`iscsiadm -m discovery -t sendtargets -p $J 2> /dev/null`
T=`echo $L | cut -d" " -f2`
echo "...$T"
if [ "$T" != "" ]; then
P=`echo $L | cut -d, -f1`
echo "...$P"
# bind
iscsiadm -m node -T $T -p $P --$ACT
fi
done
vgscan
lvscan
exit
Restart the clvmd and lvm2-monitor services on all nodes.
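On a Red Hat-style init system of that era, that would be something along the lines of (service names assumed from the stock clvmd and lvm2 packages):
service clvmd restart
service lvm2-monitor restart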
From any node, execute:
vgdisplay cloud
This will show the total space available across the cloud. Turns out system-config-lvm is cloud-aware, and will keep all nodes in sync when creating or adjusting LVs.
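The same can be done from the command line. A minimal sketch of carving a GFS2 filesystem out of the pooled space, where the LV name, size, journal count, and cluster name (which must match cluster.conf) are all placeholders:
lvcreate -n shared01 -L 20G cloud
mkfs.gfs2 -p lock_dlm -t mycluster:shared01 -j 3 /dev/cloud/shared01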