Thursday, June 25, 2009

Scaffold in Chinatown, Pt 4

Maybe David Copperfield is going to make it disappear.

Tuesday, June 23, 2009

Sculpture Garden Fountain

In the background are the National Gallery rotunda and the Washington Monument.

US Capitol

From the window of the commuter bus... which I got to ride for free. Yesterday's driver forgot his hole punch and used a pen to mark the ticket. Today's driver punched over yesterday's mark.

Friday, June 19, 2009

Scaffold in Chinatown, Pt 3

This is getting exciting. Turns out the crew is working at night. About 8pm they close down H St. How truly civilized.

Xen error: xc_dom_find_loader

One of my Xen virtualization platforms crashed yesterday, throwing the error:
Error: (2, 'Invalid kernel', 'xc_dom_find_loader: no loader found\n')
Had the hardest time figuring out the cause.

Then, for whatever reason, I did a df -h and found the /var/lib/xen mount was at 100%. I wiped out the contents of the save subdir and moved all the boot_ files to /tmp. Suddenly, all my VMs could start.
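For the record, the recovery amounted to roughly this (hostname invented for illustration; paths per a stock Xen install, so adjust to your layout):
[root@xen1 ~]# df -h /var/lib/xen
[root@xen1 ~]# rm -rf /var/lib/xen/save/*
[root@xen1 ~]# mv /var/lib/xen/boot_* /tmp/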

Tuesday, June 16, 2009

Scaffold in Chinatown, Pt 2

They've added another layer of scaffold, but haven't actually done anything. It must be the same crew that has about half of all the subway escalators offline... you never actually see them working.

Friday, June 12, 2009

Pennsylvania Avenue

At the National Archives, looking southeast toward the Capitol.

Scaffold in Chinatown

Looks like somebody is going to work on the Chinatown arch. I'll have to keep an eye on this.

Wednesday, June 10, 2009

Running Firefox Across an SSH Connection

I've always been annoyed by the Firefox "feature" of executing locally, even if initiated remotely. Consider this situation: You are running Firefox on your Linux workstation, and SSH into a server. On the server's command line, you launch Firefox. Rather than opening a new window which is X-tunneled back through SSH, it opens a new tab on the local Firefox. Argh!

Here's the solution:
[bungerd@lnxqa20 ~]$ MOZ_NO_REMOTE=1 firefox
[bungerd@lnxqa20 ~]$ firefox -no-remote
Either works, but the second seems more intuitive.
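For completeness, the whole round trip looks something like this (the workstation prompt is invented for illustration; ssh -X requests X11 forwarding, assuming the server permits it):
[bungerd@workstation ~]$ ssh -X lnxqa20
[bungerd@lnxqa20 ~]$ firefox -no-remote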

Monday, June 08, 2009

The Library of Congress


It was a nice day. It looked like a nice picture.

Sunday, June 07, 2009

Power Sources For DC-Baltimore Area

I got this in my power bill this week: a breakdown of the origins of the electrical power supplied by BGE (Baltimore Gas and Electric, which is owned by Constellation Energy). The good news is very little oil. I already knew a significant portion was nuclear.

What is interesting is the Renewable energy column:

No solar. We've got a hundred miles of shoreline, and no wind power. Oh, and I love that wood-fired generators fall under the renewable energy category. The best part of wood power is that we consume forests, reducing our natural carbon filter, and discharge carbon at the same time. How very efficient.

This chart is a perfect example of the single biggest argument expounded against solar and wind power: politicians have claimed for decades that because we can't run cities like DC entirely on solar and wind, the technology is worthless.

I'm going to hand-key the stats below to seed the search engines:
Coal 51.2%
Oil 0.3%
Natural Gas 6.4%
Nuclear 33.2%
System Mix 4.3%
Renewable Energy
* Captured Methane Gas 0.3%
* Geothermal 0%
* Hydroelectric 2.8%
* Solar 0%
* Solid Waste 0.1%
* Wind 0%
* Wood or Biomass 1.5%

Monday, June 01, 2009

Unraiding a Mirrored LVM Physical Volume

Here's the problem I've got: I have a mirrored RAID-1 that acts as the physical layer for an LVM volume group. One of the disk elements in the RAID mirror needs to be permanently removed. We could simply fail that element and be happy, but the question is whether I can move the physical volume from the RAID device to a bare partition without affecting the logical volume.

Be forewarned: we'll need to reboot to get this to work.

Initial config:
[root@gfs1 ~]# ls -l /mnt
total 16
-rw-r--r-- 1 root root 0 2009-06-01 11:32 hello-there
[root@gfs1 ~]# pvscan
PV /dev/sda2   VG VolGroup00   lvm2 [4.88 GB / 2.59 GB free]
PV /dev/md0    VG changeme     lvm2 [4.99 GB / 3.99 GB free]
Total: 2 [9.87 GB] / in use: 2 [9.87 GB] / in no VG: 0 [0 ]
We want to change /dev/md0 to /dev/sdb1 without killing the hello-there file.
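Before touching anything, it's worth recording the physical volume's UUID; foreshadowing: you'll want it later. Something like:
[root@gfs1 ~]# pvdisplay /dev/md0 | grep "PV UUID"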

First, we remove the disk we want to keep from the array. We're doing this because the array is going to be destroyed.
[root@gfs1 ~]# umount /mnt
[root@gfs1 ~]# mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0
[root@gfs1 ~]# mdadm /dev/md0 -r /dev/sdb1
mdadm: hot removed /dev/sdb1
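At this point md0 is running degraded on the remaining element alone; a quick look at /proc/mdstat should confirm sdb1 is out:
[root@gfs1 ~]# cat /proc/mdstat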
To prevent the RAID from restarting at boot time, we need to change the partition type from FD to 8E. Use fdisk, per SOP, or...
[root@gfs1 ~]# echo -e "t\n 8e\n p\n w\n" | fdisk /dev/sdb
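If you're paranoid (and you should be), verify the tag took before moving on:
[root@gfs1 ~]# fdisk -l /dev/sdb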

Now we nuke the RAID. To accomplish this we change the partition type of the disk we're discarding, the other mirror element, from FD to 83 (I'm calling it /dev/sdc here; substitute your actual device):
[root@gfs1 ~]# echo -e "t\n 83\n p\n w\n" | fdisk /dev/sdc
[root@gfs1 ~]# echo "y" | pvremove /dev/md0 -ff
Notice we are setting this one to a standard partition (83) rather than LVM (8e). This is because we do not want the kernel initializing that partition in the next step. Now, we do our first reboot.

During the boot sequence, neither disk will be recognized as a RAID element. The disk we want to keep will come up as a physical volume, because we tagged it as 8E. The disk we want to lose will appear to be an unformatted partition.

When the boot is complete, log back in, and check the LVM status:
[root@gfs1 ~]# pvscan
One of two things happened: you got lucky and your volume is displayed; jump to the last step. Nine times out of ten, though, you have to reinitialize the volume with the correct UUID. You did record the UUID before you started... didn't you?

Oops.

No worries. Try this (remember, your UUID will be different):
[root@gfs1 ~]# vgcfgrestore changeme
Couldn't find device with uuid '7ZXhzB-Bsm0-w9be-cu57-EPDx-xk4Q-f9vHdv'.
Couldn't find all physical volumes for volume group changeme.
Restore failed.
[root@gfs1 ~]# pvcreate --uuid '7ZXhzB-Bsm0-w9be-cu57-EPDx-xk4Q-f9vHdv' /dev/sdb1
Software RAID md superblock detected on /dev/sdb1. Wipe it? [y/n] y
Wiping software RAID md superblock on /dev/sdb1
Physical volume "/dev/sdb1" successfully created
[root@gfs1 ~]# vgcfgrestore changeme
Restored volume group changeme
[root@gfs1 ~]# pvscan
PV /dev/sdb1 VG changeme lvm2 [4.99 GB / 4.89 GB free]
PV /dev/sda2 VG VolGroup00 lvm2 [4.88 GB / 2.59 GB free]
Total: 2 [9.87 GB] / in use: 2 [9.87 GB] / in no VG: 0 [0 ]
[root@gfs1 ~]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "changeme" using metadata type lvm2
Found volume group "VolGroup00" using metadata type lvm2
[root@gfs1 ~]# lvscan
inactive '/dev/changeme/keepme' [100.00 MB] inherit
ACTIVE    '/dev/VolGroup00/LogVol00' [2.00 GB] inherit
ACTIVE    '/dev/VolGroup00/LogVol01' [288.00 MB] inherit
[root@gfs1 ~]# lvchange -ay /dev/changeme/keepme
Mount the logical volume, and you're done.
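In this case, that's (device path from the lvscan above):
[root@gfs1 ~]# mount /dev/changeme/keepme /mnt
[root@gfs1 ~]# ls -l /mnt
If all went well, the hello-there file is right where we left it.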