Monday, June 01, 2009

Unraiding a Mirrored LVM Physical Volume

Here's the problem I've got: I have a RAID-1 mirror that acts as the physical layer for an LVM volume group. One of the disk elements in the mirror needs to be permanently removed. We could simply fail that element and be happy, but the question is whether I can move the LVM physical volume onto a plain partition without affecting the logical volume.

Be forewarned: we'll need to reboot to get this to work.

Initial config:
[root@gfs1 ~]# ls -l /mnt
total 16
-rw-r--r-- 1 root root 0 2009-06-01 11:32 hello-there
[root@gfs1 ~]# pvscan
PV /dev/sda2   VG VolGroup00   lvm2 [4.88 GB / 2.59 GB free]
PV /dev/md0    VG changeme     lvm2 [4.99 GB / 3.99 GB free]
Total: 2 [9.87 GB] / in use: 2 [9.87 GB] / in no VG: 0 [0 ]
We want to change /dev/md0 to /dev/sdb1 without killing the hello-there file.
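
Before touching anything, jot down the PV UUID of /dev/md0 and take a quick look at the mirror members; the UUID in particular will matter later. These two checks aren't part of the transcript below, just standard LVM and md tools, but something along these lines will do it:
[root@gfs1 ~]# pvdisplay /dev/md0 | grep "PV UUID"
[root@gfs1 ~]# cat /proc/mdstat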

First, we remove the disk we want to keep from the array. We're doing this because the array is going to be destroyed.
[root@gfs1 ~]# umount /mnt
[root@gfs1 ~]# mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@gfs1 ~]# mdadm /dev/md0 -r /dev/sdb1
mdadm: hot removed /dev/sdb1
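
A quick sanity check at this point doesn't hurt (it's not in the transcript, but it's cheap): confirm that /dev/sdb1 really is out of the array and that md0 is still running, degraded, on the other member.
[root@gfs1 ~]# mdadm --detail /dev/md0
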
To prevent the RAID from reassembling at boot time, we need to change the partition type on /dev/sdb1 from FD (Linux RAID autodetect) to 8E (Linux LVM). Use fdisk interactively, per SOP, or...
[root@gfs1 ~]# echo -e "t\n 8e\n p\n w\n" | fdisk /dev/sdb

Now we nuke the RAID. To accomplish this, we change the partition type on the remaining mirror member from FD to 83. (I'm assuming that member is /dev/sdc1 on /dev/sdc here; substitute whatever disk is still in your array.)
[root@gfs1 ~]# echo -e "t\n 83\n p\n w\n" | fdisk /dev/sdc
[root@gfs1 ~]# echo "y" | pvremove /dev/md0 -ff
Notice we are setting this one to a standard Linux partition (83) rather than LVM (8E). This is because we do not want the kernel initializing that partition as a physical volume in the next step. Now, we do our first reboot.
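
One optional safety net before you actually reboot (not part of the procedure proper): the vgcfgrestore step further down reads the saved metadata from /etc/lvm/backup/changeme, so stash a copy of that file somewhere safe in case anything goes sideways.
[root@gfs1 ~]# cp /etc/lvm/backup/changeme /root/changeme.vg.backup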

During the boot sequence, neither disk will be recognized as a RAID element. The disk we want to keep will be recognized as a physical volume, because we tagged it as 8E. The disk we want to lose will appear to be an unformatted partition.
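
If you want to confirm the first claim once the machine comes back up, a peek at /proc/mdstat (not shown in the original run) should list no assembled arrays:
[root@gfs1 ~]# cat /proc/mdstat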

When the boot is complete, log back in, and check the LVM status:
[root@gfs1 ~]# pvscan
One of two things happened: you got lucky and your volume group is displayed, in which case jump to the last step. Nine times out of ten, though, you'll have to reinitialize the physical volume with the correct UUID. You did record the UUID before you started... didn't you?

Oops.

No worries; as you'll see below, the failed vgcfgrestore complains about the exact UUID it's missing, and that's the UUID we feed to pvcreate. Try this (remember, your UUID will be different):
[root@gfs1 ~]# vgcfgrestore changeme
Couldn't find device with uuid '7ZXhzB-Bsm0-w9be-cu57-EPDx-xk4Q-f9vHdv'.
Couldn't find all physical volumes for volume group changeme.
Restore failed.
[root@gfs1 ~]# pvcreate --uuid '7ZXhzB-Bsm0-w9be-cu57-EPDx-xk4Q-f9vHdv' /dev/sdb1
Software RAID md superblock detected on /dev/sdb1. Wipe it? [y/n] y
Wiping software RAID md superblock on /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@gfs1 ~]# vgcfgrestore changeme
Restored volume group changeme
[root@gfs1 ~]# pvscan
PV /dev/sdb1 VG changeme lvm2 [4.99 GB / 4.89 GB free]
PV /dev/sda2 VG VolGroup00 lvm2 [4.88 GB / 2.59 GB free]
Total: 2 [9.87 GB] / in use: 2 [9.87 GB] / in no VG: 0 [0 ]
[root@gfs1 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "changeme" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
[root@gfs1 ~]# lvscan
inactive '/dev/changeme/keepme' [100.00 MB] inherit
ACTIVE    '/dev/VolGroup00/LogVol00' [2.00 GB] inherit
ACTIVE    '/dev/VolGroup00/LogVol01' [288.00 MB] inherit
[root@gfs1 ~]# lvchange -ay /dev/changeme/keepme
Mount the logical volume, and you're done.
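
For the paranoid, the real proof is the file we started with. Assuming the same mount point we used at the top:
[root@gfs1 ~]# mount /dev/changeme/keepme /mnt
[root@gfs1 ~]# ls -l /mnt
If hello-there is still sitting there, the conversion worked and the old mirror member can be pulled for good.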
