Friday, September 26, 2008

USB Flash RAID for VM

Because I have entirely too much time on my hands, and this is entirely too cool, I have granted one of my Xen VMs access to two USB thumb drives as a RAID Level 1 mirror. Turns out, this taught me a solution to a previous problem, as well.

I inserted the two drives in the Dom0 system, where they appeared as /dev/sda and /dev/sdb. I used fdisk to set the partition types to fd (Linux raid autodetect). Next, I assigned the drives to the DomU:
# virsh attach-disk valkyrie /dev/sda hdc
# virsh attach-disk valkyrie /dev/sdb hdd
From the DomU, define the RAID:
# fdisk -l 2>/dev/null | grep "/dev/hd.1"
/dev/hdc1 1 489 31270+ fd Linux raid autodetect
/dev/hdd1 1 489 31270+ fd Linux raid autodetect
# mdadm -C /dev/md0 -l 1 -n 2 /dev/hdc1 /dev/hdd1
mdadm: /dev/hdc1 appears to contain an ext2fs file system
size=31268K mtime=Fri Feb 9 13:12:40 2007
mdadm: /dev/hdd1 appears to contain an ext2fs file system
size=31268K mtime=Fri Feb 9 13:12:40 2007
Continue creating array? y
mdadm: array /dev/md0 started.
# mke2fs -j /dev/md0
Mount per SOP.
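For the record, the SOP here is nothing exotic; with an arbitrary mount point of /flash, it amounts to:
# mkdir /flash
# mount /dev/md0 /flash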

Unfortunately, this is not persistent across reboots. For that, we need to edit the VM's config file and change the disk line to include:
disk = [ "tap:aio:/xen/valkyrie.img,xvda,w",
      "phy:/dev/sda,hdc,w",
      "phy:/dev/sdb,hdd,w"
]
That should get it.
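Once the DomU comes back up, the state of the mirror can be confirmed with the usual suspects:
# cat /proc/mdstat
# mdadm --detail /dev/md0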

Friday, September 19, 2008

Windows 98 under Xen: Pt 3

Another "I give up". This is one of those "of course it works... so why bother testing it" situations. From all indications and documentation, it should be supported. Having said that, I can find no evidence of anyone successfully running W98 under Xen.

Maybe if I continued to try, I could get it to work. But you know what? It's just not worth it, because I have a plan. Let's paravirt Fedora, VNC in, and run our W9x application in Wine, instead.

Wednesday, September 17, 2008

Windows 98 under Xen: Pt 2

Oops. That didn't work as well as I'd hoped. This should teach me not to post simply because the install started. (Probably not.)

Once the W98 VM rebooted, it reported "Invalid system disk". I did two things to correct this. First, I ensured the logical volume was set to 500M, as W98 did not provide native support for disks over 512M. Second, I booted from the CD and, rather than installing, selected recovery mode. From there, I fdisk'd the disk to ensure the MBR was set properly.
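A rough sketch of those two steps, with the volume group and LV names taken from my virt-install command in the original post (and the fdisk step typed at the recovery command prompt):
# lvremove /dev/vg0/vm2w98
# lvcreate -L 500M -n vm2w98 vg0
A:\> fdisk /mbr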

After a second install, the VM booted (better than before), but crashed with a "VCACHE: Windows protection error".

I Got A Promotion


Apparently, Verisign says I'm the CEO of Red Hat. Pretty good trick, considering I haven't worked for them for about a year. What's really amazing is the deal they're offering: a 2G flash drive! Wow!


Windows 98 under Xen

As I continue to have more and more fun with Xen, I wanted to virtualize a couple old Windows systems. You won't believe my motivation... I only have one license for Windows XP, but wanted a couple other machines. Yes, that's right: I didn't want to pirate XP. Imagine that.

Since I have a couple copies of 95 and 98SE in the filing cabinet, it was time to give them a try. I couldn't get the wizard to work, but had better luck from the command line. The first trick was to add another package, virt-viewer. The following launched an install:
# virt-install -n VM02-w98 -r 270 -f /dev/vg0/vm2w98 -l /net/scully/var/ftp/iso-w98se/w98se.iso -v --vnc
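(As for that extra package, it came straight from the repositories; on Fedora it should be as simple as:)
# yum install virt-viewer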

The first pass required that the virtual disk be initialized, which in turn required a reboot of the guest. The reboot failed, however, forcing me to issue:
# virsh destroy VM02-w98; virsh undefine VM02-w98
This was followed by a second install command, which proceeded as normal.

Monday, September 08, 2008

Cosmic Motors Gallery

Check out Daniel Simon's Super Cool Sci-Fi Cars From Another Galaxy. Great pictures, but I still think aliens shouldn't be speaking English.

Saturday, September 06, 2008

No "Raw Device Mapping" for Xen

One invaluable feature of the VMware ESX architecture is the Raw Device Mapping (RDM). This allows a virtual machine to access a partition and format it using the "native file system of the guest operating system." (In other words, a Windows VM would format it as NTFS. A Linux VM would format it as ext3.) The advantage of this is that a physical machine could easily access the same data, should the VM crash or become corrupted.

Unfortunately, this doesn't seem to work as I had hoped under Xen. I carved a partition out of Dom0's hard drive and mapped it to the VM:
# virsh attach-disk valkyrie /dev/hda7 xvdb
This failed, until I formatted the partition. That makes sense.

Once the partition attached, the DomU could see it as a separate drive, but not as a file system. It was necessary to fdisk /dev/xvdb, then mke2fs -j /dev/xvdb1. So far, so good. The tricky part came when I tried to access the partition from Dom0:
# virsh detach-disk valkyrie xvdb
# mount /dev/hda7 /mnt
mount: wrong fs type, bad option, bad superblock on /dev/hda7,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
Now, it doesn't recognize that the partition is formatted. What a bummer. This seems to imply that to avoid data loss due to a corrupted image file, we need to place the data on Dom0, then share it out to the local VMs through the local bridge, virbr0.
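One thing I haven't tried yet: since the DomU ran fdisk against the virtual disk, /dev/hda7 now contains its own partition table, so the file system starts at an offset. In theory, the same kpartx trick from the LVM mounting post should expose the nested partition to Dom0:
# kpartx -a /dev/hda7
# mount /dev/mapper/hda7p1 /mnt
(The exact mapping name may differ; kpartx -l /dev/hda7 will show it.)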

Xen Error (22, "Invalid argument"): Pt 2

I give up. The problem was that domains would install correctly, but fail to start.

At first, I thought the failures occurred only when I installed from a kickstart file. During the install, a template kickstart is created as anaconda-ks.cfg, which contains:
bootloader --location=mbr --driveorder=hda --append="rhgb quiet"
If a domain is manually installed, the line is:
bootloader --location=mbr --driveorder=xvda --append="console=xvc0"
My kickstart was actually misconfiguring the boot loader.

To review the bootloader options, I used:
xm create -c domain
This let me get into Grub. Unfortunately, that means Error 22 isn't an MBR problem. None of my attempts to fix it from within Grub helped.

Next, I found that I could bypass Grub. The trick was to add extra directives to the domain config file:
kernel = "/boot/vmlinuz-2.6.18-1.2798.fc6xen"
root = "/dev/xvda1 ro"
ramdisk = "/boot/initrd-2.6.18-1.2798.fc6xen.img"
#bootloader="/usr/bin/pygrub"
Notice that the bootloader directive is commented out. This failed marvelously, but got further than before. The errors looked like those from a lost root partition.

This caused me to realize two things. First: root should not be xvda1 but xvda2 or xvda3, as boot would be xvda1. Second: root was on an LVM, not directly on an xvda partition.
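Had I stuck with the LVM layout, the root directive presumably should have pointed at the guest's volume group instead of a raw partition; assuming the Fedora default names, something like:
root = "/dev/VolGroup00/LogVol00 ro"
(Untested; the initrd would also need LVM support for that to work.)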

So, I reimaged the VM using an image file rather than LVMs. Worked. How strange.

Okay, a couple of important issues here: this is Xen 3.0 (as explained in an earlier post), but I don't think that is the prime factor. I am trying to put the image on a logical volume, on a software RAID. I know LVM images work fine under 3.1, and others have it running on 3.0, but I've found no one else trying the three together. The three just don't seem to play well with one another.

Tuesday, September 02, 2008

Mounting A Xen LVM Image

How to access a VM's logical volumes from Dom0. This assumes the VM is powered off and its image resides on an LVM (LVMs inside an LVM). It will not access /boot or the MBR.
Attach and confirm the image
# kpartx -a /dev/volume/domain
# kpartx -l /dev/volume/domain
Acquire and access the logical volumes
# vgscan
# lvdisplay | grep -i "Name\|Status"
# vgchange -ay
# lvdisplay | grep -i "Name\|Status"
Mount and view the volumes
# mount /dev/VolGroup00/LogVol00 /mnt
# cd /mnt
# ls -l
# cat /mnt/etc/hosts
# df
Unmount, deactivate, and detach the volumes
# cd
# umount /mnt
# vgchange -an
# kpartx -d /dev/volume/domain
The single biggest "gotcha" is the need to ensure that Dom0 and the DomUs don't both use the default volume group name.
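The cleanest way to avoid that collision is to give each guest a non-default name at install time. In a kickstart, that's just the volgroup/logvol lines; the names here are mine:
part /boot --fstype ext3 --size=100
part pv.01 --size=1 --grow
volgroup vg_valkyrie pv.01
logvol / --vgname=vg_valkyrie --name=root --fstype ext3 --size=1 --grow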

Xen Error (22, "Invalid argument")

Occasionally, with enough regularity to be very annoying, a Xen guest that installs cleanly fails to start, throwing the error:
libvir: Xen Daemon error : POST operation failed: (xend.err "Error creating domain: (22, 'Invalid argument')")
error: Failed to start domain valhalla
I had little luck finding the nature of Error 22, until I hacked the code.

The error descriptions are listed in the Python control script, but are not passed to the user. All we get is Invalid argument. Gee, thanks.
# tail -70 /usr/lib/python2.4/site-packages/libvirt.py
--------- snip ---------
VIR_ERR_NO_SOURCE = 19
VIR_ERR_NO_TARGET = 20
VIR_ERR_NO_NAME = 21
VIR_ERR_NO_OS = 22
VIR_ERR_NO_DEVICE = 23
VIR_ERR_NO_XENSTORE = 24
VIR_ERR_DRIVER_FULL = 25
--- output truncated ---
Notice number 22: No OS. Okay, that I can deal with. It can't find the boot sector. All we need is a virtual rescue disk.

Stay tuned.

Disappearing Xen Config Files

An interesting "feature" of Xen 3.0 is that when you issue:
virsh undefine domain
it executes:
rm -rf /etc/xen/*domain*
This means you can't keep revisions of the config file in the directory.
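The obvious workaround is to stash a copy somewhere outside /etc/xen before undefining (the destination is arbitrary):
# cp /etc/xen/domain /root/domain.cfg.bak
# virsh undefine domain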

BTW: Yes, I know 3.0 is an old version, but I'm still running FC6 on my infrastructure hardware. Why? Because if it ain't broke... don't fix it. My Fedora 7 virtualization server has 3.1, which seems to be current as of this writing. Again, no reason to go to Fedora 9 if it's also running 3.1.

Disconnecting SSH Xen Console

I SSH'd into a server, then launched a Xen console, and found myself stuck. Here was the solution:
Fedora Core release 6 (Zod)
Kernel 2.6.18-1.2798.fc6xen on an i686
valhalla.terran.lan login: Ctrl-]
[root@baltar xen]#
The key sequence of Control and Right Bracket got me out of the Xen session, but kept me in the SSH session.
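Getting back into the guest later is just a matter of reattaching from Dom0:
# xm console valhalla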