Wednesday, January 04, 2006

TILDWSB: LVM on Software Raid

From the Things I Learned During Winter Solstice Break department, it turns out that LVM on RAID is not as straightforward as it may seem. I had thought I would migrate one of my systems remotely, but decided it would be smarter to try it on a development box first. Lucky me for being prudent, as it did not work the first time.

The server had software RAID running across two drives, containing /home. I wanted to get /var/spool/mail onto the RAID as well, and LVM seemed like a good solution. I knew the migration would be destructive, so I made a backup and moved it offsite. I then replicated the hardware configuration and restored the backup onto the development server.
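
Something along these lines would do for the backup step; the paths and the destination host here are placeholders, not my actual setup:

    # archive the data, preserving permissions (paths are examples)
    tar -czpf /tmp/home-backup.tar.gz /home /var/spool/mail
    # ship the archive offsite (backup.example.com is a placeholder)
    scp /tmp/home-backup.tar.gz root@backup.example.com:/backups/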

First, I issued pvcreate /dev/md0, which (as I expected) corrupted the ext3 filesystem. Second, I created the new volume group with vgcreate vg1 /dev/md0 (I already had a vg0). Third, I set up the new home filesystem with lvcreate -L +256M vg1 -n lv.home. Everything was still on track.
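
Taken together, the sequence so far looks like this; consider it a sketch rather than a transcript, with the device and volume names as above:

    # step 1: initialize the RAID device as an LVM physical volume
    pvcreate /dev/md0
    # step 2: create a second volume group on top of it
    vgcreate vg1 /dev/md0
    # step 3: carve out a 256 MB logical volume for home
    lvcreate -L 256M -n lv.home vg1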

Since pvcreate had wiped out the ext3 filesystem, lv.home had to be reformatted. This required mke2fs -j /dev/vg1/lv.home. Oh no! An error!
    mke2fs: bad fragment size - /dev/vg1/lv.home
Good thing this wasn't the production system. Unfortunately, the complaint didn't seem to have any bearing on the actual command, as I had not specified any block or fragment sizes at all.
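
If the autodetected defaults were the issue, one diagnostic would be to pass an explicit block size so mke2fs does not have to guess; this is speculation on my part, not something that fixed it here:

    # force a 4 KB block size instead of letting mke2fs autodetect
    mke2fs -j -b 4096 /dev/vg1/lv.home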

Turns out, there is a problem building an LVM on an existing RAID, apparently because of the leftover ext3 filesystem on it. The solution was to dismantle the LVM, stop the RAID, use fdisk to delete the partitions, then start over. This time, I did not format the RAID itself. On the second attempt, the logical volume formatted without error.
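
The teardown and rebuild looked something like the following sketch; the mdadm invocations and the hda/hdc partition names are reconstructions, so adjust them for your own drives:

    # remove the LVM layers
    lvremove /dev/vg1/lv.home
    vgremove vg1
    pvremove /dev/md0
    # stop the array
    mdadm --stop /dev/md0
    # delete the partitions on each drive and recreate them
    fdisk /dev/hda
    fdisk /dev/hdc
    # rebuild the mirror; do NOT run mke2fs on the raw /dev/md0
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

Then repeat the pvcreate, vgcreate, and lvcreate steps from above before formatting the logical volume.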
