Monday, December 27, 2010
An EC2 Conundrum
The Amazon cloud, known as Amazon Web Services (AWS), is billed on three meters:
* VM Resources, such as CPU and memory
* Storage, either block volumes or web based files
* Bandwidth, both into and out of the cloud
I have a set of VMs that I launch as needed, so I am not always billed for cycles or bandwidth. When I need the VM, I don't want to have to upload all the supporting applications and data, or go through a complex configuration procedure. The solution was to grab an Elastic Block Store (EBS), which looks to a Linux VM as a disk device. I provision the VM, connect the volume, log in, mount the device, where I have a set of scripts that rebuild the application server in less than 1 minute.
Here's where I got burned: The EBS is actually a LUN on a SAN, which resides in a data center, somewhere in the world. Amazon has four regions: Virginia, California, Ireland, and Singapore. I picked Virginia. But in Virginia, they have four data centers, called availability zones, labeled A, B, C, and D. My volume is in Virginia "B". Unfortunately, they have insufficient capacity in VA-B to launch a VM, as of about 21 December.
This means I've got stuff on a disk, somewhere across the Potomac, that I can't get to, because I don't have a machine to access it. I could launch a VM in VA-C or VA-D, but there is no native mechanism to allow VMs to mount disks that live in another data center. Thus the conundrum: How do we protect against this situation?
The answer is obvious: clustered replication. Two EBS volumes in different data centers, with one VM acting as the master node, and another VM acting as the replication node. Unfortunately, this doubles the cost of the system... From $15 a month to $30 a month. Not really that much... and that assumes my data is critically important, which it isn't.
But you'd think Amazon would have provided a way to prevent this from happening. After all, it's not like me paying twice as much on a monthly basis is something they'd actually want to happen.
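For what it's worth, a cheaper hedge than live replication is periodic snapshots: an EBS snapshot lands in S3, which is region-wide rather than zone-bound, so the volume can be rebuilt in any zone. A sketch using the ec2-api-tools of the era (the volume and snapshot IDs are hypothetical):

```shell
# snapshot the volume; the snapshot is stored in S3, region-wide
ec2-create-snapshot vol-12345678

# later, if VA-B has no capacity, rebuild the volume in another zone
ec2-create-volume --snapshot snap-87654321 -z us-east-1c
```

Not automatic, and the copy is only as fresh as the last snapshot, but it's a lot less than $15 a month.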
Saturday, December 25, 2010
Schug Pinot Noir
The company gathering was at a high-end restaurant, but the wine was served at dining room temperature. Once I chilled the bottle I bought for home, I was much more satisfied. This is also a wine that truly needs to breathe to mellow. Though not expensive, it is close to my high end for everyday wine. Out of 10, I'll go a strong 6, pushing 7, with the right preparation.
Sunday, December 19, 2010
A Second Apache Instance With YUM
To avoid the overhead of administering the second instance, I started investigating an old RPM option I'd never used: --relocate. It turns out this option was the opposite of what I had expected, in that it moved the first instance, rather than installing a second. And besides, using RPM manually was only incrementally better than the original idea.
So what about YUM? There is an option for --installroot=/path. Seems like what I wanted: instead of distributing files based on system root to /etc, /usr, /var, let's put the httpd files in a tree under /opt. What happened when I ran the command surprised me:
yum install httpd --installroot=/opt
This thing is not going to only install the httpd binaries, libraries, and config files... but every dependency... which already exists on the system! And it's going to require 64M of disk space!
<snip>
Transaction Summary
=============================
Install 77 Package(s)
Update 0 Package(s)
Remove 0 Package(s)
Total download size: 64 M
Is this ok [y/N]:
Oh, wait... I've got like 250G of free space. Do I really care if it takes 64M? No! And so, I answered "Y". What did we end up with?
ls /opt
bin dev home lib64 mnt proc sbin srv tmp var
boot etc lib media opt root selinux sys usr
Ouch! That's ugly. Looks like the better (cleaner, prettier) choice would have been:
yum install httpd --installroot=/opt/httpd2i
For the sake of simplicity, through the miracle of virtualization, let's just consider that fixed.
Let's see what we got:
/usr/sbin/httpd -v
Server version: Apache/2.2.8 (Unix)
/opt/httpd2i/usr/sbin/httpd -v
Server version: Apache/2.2.8 (Unix)
What about an update? I added a repo file to include the updates directory on the satellite server.
yum update -y httpd
<snip>
/usr/sbin/httpd -v
Server version: Apache/2.2.9 (Unix)
/opt/httpd2i/usr/sbin/httpd -v
Server version: Apache/2.2.8 (Unix)
As expected for the base install, but no love from the second instance.
yum update -y httpd --installroot=/opt/httpd2i/
Setting up Update Process
No Packages marked for Update
Still no good. As a matter of fact, nothing seemed to work. So, as a workaround, I tried this:
yum install httpd -y --installroot=/opt/httpd2i-2/
rsync -Pr httpd2i-2/* httpd2i/ --update
/opt/httpd2i/usr/sbin/httpd -v
Server version: Apache/2.2.9 (Unix)
rm -rf /opt/httpd2i-2
In a nutshell: create a third instance, and copy the third instance over the second, hoping not to overwrite any configuration files in the process.
Does this solve the original problem? Sort of. Is it easier than recompiles? It's faster. Just one more problem... As is, the new Apache does not run. Looks like we need some more hacking. Stay tuned for part 2.
*** Update ***
On second thought... I'll just recompile. It turns out there are some references to that path in the RedHat binaries. That's bad form on their part, and they should be ashamed, but by the time I figure out how to hack this, the recompile will be done.
So, no part two. Just snag the binaries and be done with it. That doesn't mean that this feature is useless. It just means that it didn't solve this problem.
Wednesday, December 15, 2010
Thursday, December 09, 2010
So, So, Sad: And It's ITIL's Fault
Yesterday, I found out that I failed the fifth test. I was crushed! Not because I failed: I fail all the time. Constantly. As a matter of fact, I failed tests one and two, but those didn't bother me. Let me explain.
I call the testing format Three Little Kittens.
You get a case study.
Three little kittens, have lost their mittens.
You must select "the best" solution, based upon four choices:
1. They bought gloves
2. They found their mittens
3. And they shall have no pie
4. Kittens don't wear mittens
The operative factor in this process is the fact that we have to pick "the best" solution. The answers are weighted with scores of 5, 3, 1, and 0 points. In the case study above, the answers logically break out as follows.
1. Throw money at the problem
2. A definitive solution
3. Punishment does not solve the problem
4. True, but irrelevant
Thus, the 5 point answer is "2", the 3 point answer is "1", the 1 point answer is "3", and number "4" is worth nothing, even though it is completely accurate.
I failed the first two tests because I did not personally recognize the level of dedication that is needed for the certification track. Furthermore, the class vendor, Global Knowledge, has not done a good job of setting expectations. Embarking on this process requires either significant management and project experience, or the purchase of supplemental material and several weeks of study before the class.
This certification also requires complete support from your employer. They have got to be willing to give you the time and resources to succeed. They have got to recognize the value they will receive from this process.
I have scheduled to retake the class and tests 1 and 2. After passing tests 3 and 4, I was very confident that I understood the testing method, and the amount of preparation needed beforehand. My results for test 5?
50% of answers were 5 pointers
12% of answers were 3 pointers
0% of answers were 1 pointers
38% of answers were 0 pointers
Fail!
So, whose fault is this? Doesn't matter (see justification 3 above.) But I am sad.
Monday, December 06, 2010
Why Am I Changing Light Bulbs?
Yes, a thousand years, damn it. They said the bulbs lasted five times longer than "normal" bulbs. They said that even though they cost more, they don't really cost more when you consider the cost of all the bulbs you won't have to buy in the future. And you're saving energy.
Yeah, because they put out half the light of normal bulbs. One bathroom had a two bulb fixture. If you walked into the bathroom in the middle of the night, it took five minutes for the lights to power-up. (I was already "done" by then.) So, I changed one of the CF bulbs for a normal bulb. For short jobs, the incandescent bulb fires up giving us 50% light, and for jobs taking more than 15 minutes, the CF gets us up to 90%. And still saves energy.
But the CF bulb has burned out. Not the normal bulb. The CF that was supposed to last five times longer! What's the deal? It's almost like they lied to me... or something.
Oh no: let's not lose sight of what's important. I'm saving the planet. I'm being environmentally aware. The CF bulbs reduce my carbon footprint. And they contain deadly levels of mercury that is sufficiently toxic that improper disposal is criminal in some jurisdictions.
Now if you'll excuse me, I've got some endangered tigers that need to be shot.
Saturday, December 04, 2010
Oops, I've Seen All of Netflix
Okay, not every movie in the entire Netflix inventory, but every movie that I've ever wanted to see. I'm down to watching foreign flix with subtitles. I'm to the point that I'm saying to myself: "Hey, I don't think that was all that bad. Oh sure, I purposely avoided it when it came out at the theaters since it wasn't worth spending money on, and I didn't watch it on HBO or TV since I had better stuff to do, but I'll add it to my Netflix queue."
Why? Because Netflix is a flat rate service. It's a buffet. All you can eat. There was an old Huey Lewis song:
"The sign on the door said all you can eat for $1.99, but one dollar's worth was all that I could stand"
(Like there is anything but old Huey Lewis songs.)
So here's what I have left:
- The Good, the Bad, the Weird: A subtitled Korean Western
- How to Train Your Dragon: What passes for a cartoon these days
- Daybreakers: Because the world needed another vampire movie
- Scott Pilgrim vs The World: For the 80's video game references
Sometime in the future, they'll have:
- Despicable Me: I dream of world domination
- Inception: Speaking of dreams
- Skyline: Gotta keep an eye on the aliens
I hope somebody invents something Earth shatteringly entertaining... quick.
*Sigh*
Saturday, November 13, 2010
Fedora and Screen Resolution: Pt2
cat /etc/X11/xinit/xinitrc.d/setres.sh
Notice the script's path.
#!/bin/sh
xrandr --newmode "1280x1024" 108.88 1280 1360 1496 1712 1024 1025 1028 1060 -HSync +Vsync
xrandr --addmode VGA-0 "1280x1024"
xrandr -s "1280x1024"
Fedora and Screen Resolution
I don't know exactly when the change occurred, but starting with F13, my system always booted to 1024x768. Back in the day, we'd use system-config-display to set the proper resolution, but it is no longer a default component, and has become unreliable. Instead, we should use the less intuitive xrandr. So, here's what it takes to override the default resolution.
First, determine what xrandr sees. From within an X desktop environment, open a terminal window.
xrandr -q
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 2944 x 1024
VGA-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
1024x768 60.0*
800x600 60.3
640x480 59.9
It knows that Screen 0 (the monitor) is capable of 2944x1024. For some reason, VGA-0 (the video card) is defaulting to 1024x768, as indicated by the asterisk.
Second, we need to configure more video modes. This requires that we feed xrandr a huge amount of information we don't have. Luckily, gtf knows what we need. Let's try to bump the resolution up one notch to 1152x864 with 60Hz refresh rate.
# gtf 1152 864 60
# 1152x864 @ 60.00 Hz (GTF) hsync: 53.70 kHz; pclk: 81.62 MHz
Modeline "1152x864_60.00" 81.62 1152 1216 1336 1520 864 865 868 895 -HSync +Vsync
We need the second line to feed to xrandr:
xrandr --newmode "1152x864_60.00" \
81.62 1152 1216 1336 1520 864 865 868 895 -HSync +Vsync
Our new mode has been staged; now it has to be connected to a video card, and activated:
xrandr -q
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 2944 x 1024
VGA-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
1024x768 60.0*
800x600 60.3
640x480 59.9
1152x864_60.00 (0x7e) 81.6MHz
h: width 1152 start 1216 end 1336 total 1520 skew 0 clock 53.7KHz
v: height 864 start 865 end 868 total 895 clock 60.0Hz
xrandr --addmode VGA-0 "1152x864_60.00"
If you're lucky, you can still see your video output!
xrandr -q
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 2944 x 1024
VGA-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
1024x768 60.0*
800x600 60.3
640x480 59.9
1152x864_60.00 60.0
xrandr -s "1152x864_60.00"
xrandr -q
Screen 0: minimum 320 x 200, current 1152 x 864, maximum 2944 x 1024
VGA-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
1024x768 60.0
800x600 60.3
640x480 59.9
1152x864_60.00 60.0*
Oddly, it would seem the part in quotes is completely arbitrary, but GNOME's desktop resolution utility needs that field to be in the 1234x567 format. The real headache is trying to determine the available resolutions and scan rates. If you really feel daring you can pump all the possibilities into the list and try them until you find one you like. Here's the list:
for J in `elinks http://bunger.us/rez.xml`; do \
X=`echo $J | cut -dx -f1`; Y=`echo $J | cut -dx -f2`;\
for K in 60 75 80 120; do \
gtf $X $Y $K | grep Modeline; \
done; done
And here's the sledgehammer that populates the list:
for J in `elinks http://bunger.us/rez.xml`; do \
X=`echo $J | cut -dx -f1`; Y=`echo $J | cut -dx -f2`;\
for K in 60 75 80 120; do \
L=`gtf $X $Y $K | grep Modeline | cut -d\ -f4-99`;\
M=`echo $L | cut -d\ -f1`; \
xrandr --newmode $L; xrandr --addmode VGA-0 $M;\
done; done
Monday, October 25, 2010
Worst Day Ever!
Just more than 1/8 of a tank of gas
Yeah... I can make it... Twenty miles to the Metro station. I'll get gas on the way home.
Got to the Metro station okay.
But I'd forgotten my Smartrip automated train ticket!
Can't very well drive thirty minutes home and thirty minutes back to get it; I'll have to buy another. All the Smartrip kiosks are down. Only paper tickets available.
No big deal. I can buy a paper ticket. One problem... You can't pay for parking with a paper ticket, only with the automated Smartrip debit card. No cash... Only the single purpose debit card. And, no, I don't know why you can't pay for parking with a paper ticket.
Okay. I buy a day pass for $9.00 and will get a Smartrip card when I get to downtown. Turnstile won't accept my paper ticket. Everybody else is getting through? I walk toward one of the attendants, but don't even get a chance to open my mouth. She says:
9:30
Oh yeah, that's right... Day passes are only good 9:30AM to midnight. So I buy a $10 round trip ticket. And get on the train. And ride to downtown. And get off the train. And look for a kiosk to buy the Smartrip card. And it's down.
No time to worry, got to walk to class. What was the address? It's in my smart phone, in my case...
Crap, left my phone at home.
I've been to the building before, just look for landmarks. There it is. Made it on time.
But the vendor doesn't have custom curriculum for this particular class. I did bring my copy of the manual from the British Office of Government Commerce, but the material they are using is from the Netherlands. It was translated from the Queen's English, to Dutch, then back to English. And the instructor is Indian. So much for all that time I spent learning German. And Italian.
So at lunch, I use the day pass to ride to Metro Center and buy a Smartrip preloaded with $5 to pay my parking fee of $4.75. And I didn't run out of gas.
So I guess the day wasn't that bad.
And sorry for the plagiarism, Jeff.
Wednesday, October 20, 2010
Remove XML Comments with sed
sed '/<!--/,/-->/d' /target/file.xml
Use sed -i to save the changes back into the file.
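A quick sanity check of the one-liner against a throwaway file (the path and content here are hypothetical). One caveat: the range operator deletes whole lines, so a comment that shares a line with real markup takes the markup down with it.

```shell
# build a scratch XML file with a multi-line comment
cat > /tmp/sample.xml <<'EOF'
<config>
<!-- a block comment
     spanning two lines -->
<item>keep me</item>
</config>
EOF

# prints the document minus the comment block
sed '/<!--/,/-->/d' /tmp/sample.xml
```
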
Sunday, October 10, 2010
Richard St. John's 8 Secrets Of Success
Of late, I've been trying to absorb some soft skills and have found the TED initiative to be quite entertaining. Here's one that hit home, because it was the most concise definition of successful habits and behaviors I've come across for free.
As a side note, we may want to debate whether or not I'm a valid judge of successful habits, as I am neither rich nor famous, and thus not successful. Well, I assure you I am quite satisfied with my anonymity. As for the money... I have complete trust in Wall Street, Congress, and Social Security.
Oh... My paraphrased, bottom line, for those of you with an attention span of less than three minutes:
Focus on the passionate and persistent pursuit of ideas that will allow you to push yourself to do work you believe will serve people for good.
Il Cortigiano Prosecco
That explains it; and so we learn. Great price, 6 of 10.
Monday, September 20, 2010
The 10 best IT certifications: 2010
There’s no double-blind statistically valid data analysis run through a Bayesian probability calculus formula here. I’ve worked in IT long enough, however, and with enough different SMBs, to know what skills we need when the firm I co-own hires engineers and sends technicians onsite to deploy new systems or troubleshoot issues
In other words: I made all this up, based on my personal opinions.
And what qualifies me to question his greatness? After all, he:
...is president of two privately held technology consulting companies. He previously served as executive editor at TechRepublic.
Me? I'm a nobody.
Well... I may be a nobody, but at least I'm smart enough to do basic research before attaching my name to an article. News flash: there is no such certification as the RHCP. It's RHCE, you ass-clown.
Have you guessed that his article talked down the value of the RHCE, yet? He also questioned the worth of VMware and ITIL. What is his logic?
Microsoft owns the market.
In other words, this guy is one of those people that walks in, sells you a bunch of Microsoft crap, then walks away midway through the project, leaving those of us that have actually stayed current with technology to come in and clean up his mess.
Well, TechRepublic just made my proxy server's blacklist.
Thursday, September 16, 2010
Perl Taint Mode Regex
#!/usr/bin/perl -T
Now inputs must be laundered:
$validate=$form{"code"};
if ($validate =~ /^(\w*)(3|5|7)(\d{3})$/){
$validate=$1.$2.$3; }else{
die "can not validate"; }
The first line reads the input, but the input is untrusted. In the conditional, we compare the variable against a pre-defined, expected format. If the input matches the format, the variable is set back to an untainted value. If the validation fails, the script befalls a brutal and senseless death... which is better than being compromised or exploited.
The trick to this process is understanding how to format the regex and understanding how it is laundered. First, the format pattern is not regex. Sure, all the docs say it is... but it's not. So, here's what you need to know:
the format sits between / /
the ^ and $ are anchors, as regex
the ( ) encloses checks
the checks are numbered
the first check is $1, second is $2, etc
if there is an | in a check, it's an "or"
the * is a wildcard count
but {3} says exactly 3 characters
Let's look at the example above:
^(\w*)(3|5|7)(\d{3})$
Start at the beginning, get any number of \w characters and assign them to $1. Look for a 3 or a 5 or 7 and assign it to $2 (second set of parens, thus second check.) The last check ensures that the last three characters are digits. Remember that regex is "greedy", so effectively, this expression is evaluated backwards.
Now that you validated the input against the pattern, reassign the checks ($1.$2.$3) back to the variable. This nukes whatever badness the evil doer tried to impose on you. Do this for every variable you read in, and then destroy the input array, to ensure that no lazy developer slides a new form value into the script without validating.
To test match patterns from Bash, try this one-liner:
perl -e 'if("test01" =~ /^(\w{4}\d{2})$/ ){ \
print "+ $1.$2.$3.$4.$5"}else{ \
print "- $1.$2.$3.$4.$5"}'; echo
Simple, huh? Let's do an e-mail address:
perl -e \
'if( "xxx\@yyy.us" =~ /^(\w{1}[\w\-\.\_]+\w\@)(\w{1}[\w\-\.\_]+\.)(us|com|net)$/ ){\
print "+ $1.$2.$3.$4.$5"}else{
print "- $1.$2.$3.$4.$5"}'; echo
Ouch! (BTW: Bash made me escape the @ symbol.)
For a breakdown on all the pattern matches, check out Steve Litt's Perls of Wisdom.
Monday, September 06, 2010
Apache Modules for Basic Authentication
auth_basic
authn_file
authz_user
authz_default
You'll also need authz_host, but that's probably already in place to support Allow/Deny.
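For reference, a minimal sketch of the directives those modules back; the directory, realm name, and password-file path here are hypothetical, and the file itself would come from htpasswd:

```apache
# auth_basic -> AuthType, authn_file -> AuthUserFile, authz_user -> Require
<Directory "/var/www/html/private">
    AuthType Basic
    AuthName "Restricted Area"
    AuthUserFile /etc/httpd/conf/htpasswd
    Require valid-user
</Directory>
```
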
Thursday, September 02, 2010
Elegant Log Compression
How about this:
cd /some/path/logs
for J in `ls *log?$(date +%Y)-*$(expr $(date +%m) - 1)-*`; do
ls -lh $J; tar -czf $J.tgz $J; ls -lh $J.tgz; mv $J /dev/shm;
done
The beauty of this is the embedded execution statements.
Within the backticks are a pair of executions, one of which nests an execution.
This particular incarnation compresses last month's logs. It shows the original size and the compressed size, then moves the file to a holding directory. On a real machine, I'd probably change the middle line to:
tar -czf $J.tgz $J; rm -f $J
Pop this in a cronjob and run it at "1 2 3 * *".
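One caveat with the embedded expr: in January it yields 0, and a glob built from that will match far more than last month's logs. If GNU date is available, it can do the month arithmetic itself, year wrap included; a sketch (and an assumption: -d/--date is GNU-only, not POSIX, and is best run early in the month to dodge end-of-month quirks):

```shell
# compute last month's year-month stamp directly
LASTMONTH=$(date -d "last month" +%Y-%m)
echo "$LASTMONTH"

# the loop's glob then becomes:
# for J in *log?${LASTMONTH}-*; do ... done
```
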
Sunday, August 22, 2010
every: command not found
But the doc pointed out that the user "root" might not use the bash shell. What if he was a psychopath and used csh? In that case you'd have to look for a bunch of .c* files. I immediately realized the best thing to do was to wildcard this task:
chmod 600 /root/.*
And then I moved on.
But within a few minutes... something was wrong. In another window, as user "doug", I tried to list a directory.
-bash: ls: command not found
What? I couldn't even list my home directory. As a matter of fact, I couldn't execute any command.
This is where your heart kind of skips a beat. As root I could list anything, including /bin/ls. So as root, I tried to switch to user doug:
su: warning: cannot change directory to /home/doug:
Permission denied
su: /bin/bash: Permission denied
Oh crap!
Eventually it occurred to me. Consider this situation:
ls -a /root
. .. .bashrc .bash_logout .bash_profile
When I used the dot-splat wild card, it must have picked up dot-dot, which would be the root directory. From there it probably reset the permissions on /bin, /etc, and so on. I just needed to reset those perms to 755.
No luck. As a matter of fact, every directory seemed to be correct. I could not see any permission that was wrong. And then... in a stroke of unparalleled genius, I tried something else. I looked at the only set of directory permissions you can never see:
# ls -ld /
drw------- 24 root root 4096 Aug 22 22:51 /
What about:
# chmod 755 /
# ls -ld /
drwxr-xr-x 24 root root 4096 Aug 22 22:51 /
And now everything works.
Whew.
In the end, however, I do have to admit one thing: The system was significantly hardened. Hard as a brick!
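The moral can be demonstrated harmlessly in a scratch directory (the path is hypothetical): the .* glob expands to include .. (and through it, the parent), while a pattern like .[!.]* touches the dot-files without ever reaching the parent.

```shell
# scratch directory with one dot-file
mkdir -p /tmp/dotdemo && cd /tmp/dotdemo
touch .bashrc

echo .*        # expands to: . .. .bashrc -- ".." is the parent directory!
echo .[!.]*    # expands to: .bashrc
```

The safer pattern still misses oddballs like ..hidden, so for a one-shot fix, naming the files explicitly is the least exciting option.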
Installing OpenSSH from Source
When you visit the OpenSSH website you want to get the portable source. I downloaded that onto the box and extracted it to /usr/local to create the openssh-5.1p1 sub. Normally the README file explains the compile sequence but in this case I had to get the instructions from the FAQ.
The first few attempts failed until I installed zlib-devel and openssl-devel. Then it was a simple case of the standard:
./configure
make; make install
This placed the binary in /usr/local/sbin, but messed up the etc structure.
All the config files were in etc and not a sub, so I created /usr/local/etc/openssh and moved all the files into the sub. This required an update to the sshd_config, however. I had to edit the HostKey parameters to include the sub in the path.
To test, we execute:
/usr/local/sbin/sshd -Dd \
-f /usr/local/etc/openssh/sshd_config
Connect from remote. Test the keys. Bug gone. All good.
Now to symlink everything:
cd /etc/
mv ssh ssh-redhat
ln -s /usr/local/etc/openssh ssh-openssh
ln -s ssh-openssh ssh
ls -ld ssh*
cd /etc/init.d
cp sshd sshd-openssh
mv sshd sshd-redhat
ln -s sshd-openssh sshd
This gives us a SysV startup script that points to the correct config files, but the wrong binaries. We need to change all the /usr/ entries to /usr/local/:
sed -i "s~/usr/~/usr/local/~" sshd-openssh
(There's actually only two lines, and the first shouldn't count.)
Oddly, on first try, it fails. The reason is that RedHat built the SysV script to check for the path of the config, but didn't provide the path. This means it fails and uses the default. Since we moved the config... it fails. The solution, which makes everything portable, is to put the config path where RedHat expects it:
echo 'OPTIONS="-f /etc/ssh/sshd_config" ' > /etc/sysconfig/sshd
Optionally, recompile with --sysconfdir=/etc/ssh such that both binaries point to the same sub.
One downside is that the binary is running unconfined by SELinux. If you're really ambitious:
chcon -t sshd_exec_t /usr/local/sbin/sshd
chcon -u system_u /etc/init.d/sshd*
chcon -t initrc_exec_t /etc/init.d/sshd*
Restart the service to confine.
Saturday, August 21, 2010
Casa Santosola Barbera d'Asti
I went shopping for a Barolo, but couldn't find one in my price range. This was in the Piedmont section, so I gave it a try. It was a very good wine, but... it was not sufficiently different from so many other Italian reds. When I looked it up on the chart, I found that the grape, the Barbera, is right next to Sangiovese.
The product was good, the price was good, but this one just did not stand out. 6 of 10.
Kim Crawford Sauvignon Blanc
I've recently seen several ads for Kim Crawford's wines, and saw a few positive reviews, so I went about $5 out of the budget and grabbed this Sauvignon Blanc. A few points: Kim also has chardonnay, but go with the sauvignon, since the vineyards are in Marlborough, New Zealand. And as we know, if you're doing New Zealand, you're doing screw top.
The verdict? You know how snooty wine reviews talk about "hints of pear"? This one takes the aroma of pear and smacks you upside the head with it. There is no doubt about the pear flavor.
Unfortunately, I'm not real big on fruity wines. Decant this one and let it breathe. (That way no one else sees the screw cap.)
6 of 10
Valley of the Moon Chardonnay
After reading an interesting article about California versus European wines, I decided to give a few a try. The gist of the article was the thought that a 90 point wine was a 90 point wine regardless of its point of origin. This is to say that Californians are judged by the same standards as Europeans. As a result, if a California wine is highly rated, it should meet the same standards as its European counterparts.
I've always found Californians to be too flamboyant. The wines, that is. The people that run the wineries in California are always doing stuff to the wine to make it exciting. They want it to be memorable, but usually end up making it just plain bad. Europeans don't do that-- they let wine be itself, and enjoy it for what it is meant to be.
But this one was a 90 point chardonnay, and it was in my price range. And it was wine. Just wine. Not a bunch of pretentious flavoring to enhance the wine drinking experience. Just a good glass of wine. Not great... which was disappointing for a 90 pointer. If it had been unrated, it would have been a 7.
I'm going to say it's a 6 of 10.
Thursday, August 05, 2010
Bruised, But Better
It was nearly black at the doctor's office, but moving around has helped circulate the blood and get it to a nice purple.
The cover story is that I fell down the stairs. I didn't figure the insurance would pay if they knew how this really happened. And if I admit what really happened, I'd have to explain why an almost 50, out of shape, computer nerd decided to take up street fighting and kickboxing as a hobby.
Monday, August 02, 2010
Breaking Your Foot is No Fun
No drugs... and they made me walk out of the ER. You bastards!
Now if you'll excuse me, I'm off to update my Facebook status and tweet the news.
Yeah, right.
Tuesday, July 27, 2010
Who In The World Isn't On Facebook
Seriously ... at this point, who's not on Facebook?
Seriously: That was the lead line.
Did you hear that? That was the sound of Edward R. Murrow coughing up a lung in disgust at the state of what is now called journalism. (And don't even get me started on Fox!)
The article reports that "Facebook CEO Mark Zuckerberg announced that the site hit a half-billion active users" which is a total lie. Did they not notice his pants on fire? Half a billion active users? NFW. Half a billion accounts, 30% of which haven't logged in for a year, 20% of which are fake profiles used by thieves, and 10% of which are husbands claiming to be single. That leaves maybe 200 million, and that's being generous.
I thought about my friends--
technical people, hackers, nerds: not on Facebook
professionals contacts: not on Facebook
the six siblings I acknowledge: only one on Facebook
mother or father: not on Facebook
step-mother or step-father: he's on Facebook
(he friended my brother, never used the account again)
kids: on; one has four posts since 2009, the other six
So, in the end. Maybe 3% to 5% of all the people I know are on Facebook. As for CNN? I think they've just let the 995,554 people that like their page go to their head.
Saturday, July 24, 2010
Thanks for Visiting: Script Kiddy
Well, I've got this little VM floating around the clouds of the internet. Nothing exciting. It hosts http://dougbunger.com, which is mostly 404 pages and dead links. But... it's my little cloud VM, and I love it.
So all week long, somebody has been slamming my server, trying to hack in. Why? There's nothing of value. Not quite true: chances are, if they were to compromise my server, they would probably use it as a file drop for pirated media or pr0n. (And not the good kind... of either.)
I don't think it's the Chinese: they are too busy hacking Google to read their dissidents' email. No, it's the Script Kiddies. How do I know? They are hitting the server with thousands of PHP and SQL exploits. Unfortunately, the server has neither. So, I implemented an Apache redirect:
AliasMatch ^$ /var/www/html/index.html
I inserted two lines that evaluate the URL and redirect anyone that asks for anything containing PHP or SQL to another website. My regex was not sufficiently righteous, and redirected blank URIs, so the first line ensures you get an index page.
RedirectMatch (.*[pP]+[hH]+[pP]+.*) \
http://english.cpc.people.com.cn
RedirectMatch (.*[sS]+[qQ]+[lL]+.*) \
http://english.cpc.people.com.cn
And where does something like http://vypress.bunger.us/sql.php redirect? Why to the Chinese Communist Party home page, of course. Their people are trained for this kind of thing. I'm sure they will appreciate the practice.
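As an aside, mod_alias runs these patterns through PCRE, so the bracket-class gymnastics could likely be replaced with an inline case-insensitivity flag. A sketch I haven't battle-tested against Apache 2.2:

```apache
# (?i) makes the whole pattern case-insensitive
RedirectMatch (?i)(.*php.*) http://english.cpc.people.com.cn
RedirectMatch (?i)(.*sql.*) http://english.cpc.people.com.cn
```
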
Wednesday, July 21, 2010
I'll Take One Electric Sikorsky, To Go
(Assuming my IRAs ever get any more valuable than a happy meal.)
Sunday, July 11, 2010
Moinet Prosecco
This would be good event wine, but is a little too bubbly for everyday use. On my scale, it gets a high 7, because of price. If price is no object, an 8 for sure.
Monday, July 05, 2010
Witness to a Moment of Innovation
Again, not interesting, since I get Netflix (et al) DVD's in the mail a couple times a week. This was for a new TNT series called Rizzoli And Isles. As a promotional gimmick, TNT sent the pilot episode on DVD for preview of the July 12th debut.
Imagine if we started getting DVD's in the mail as often as we used to get AOL CD's. Unfortunately, once the trend catches on, most of the DVD's will be crap... Just like AOL.
Saturday, July 03, 2010
Browser Based SSH via Webshell
Yeah, I'll admit it sound's pretty far fetched, but I have found an ultra cool package that could provide exactly such an emergency functionality: Webshell 0.9.6 It runs as a local python service and allows login via an AJAX enabled browser.
Behind the scenes, the browser client communicates with the Python service, and the Python service acts as an SSH client to access the local SSH service. On the surface, this could be a problem, as the browser-to-Python connection would normally be unencrypted. This issue can be mitigated by installing OpenSSL support for Python. Unfortunately, the pyOpenSSL package wasn't in my Fedora repo, so I had to grab it from Pbone.
I made a couple tweaks to my install. I changed the port from the default 8022:

sed -i "s/8022/???/g" webshell.py

And since we always change the SSH port of outside servers:

sed -i "s/in +' loc/in +' -p ???? loc/" webshell.py

And added some headspace to the top of the page:

sed -i "s/margin:0;/margin:25px 0px 0px 0px;/" \
    www/webshell.css

And changed the font from 10 to 12:

sed -i "s/font:10/font:12/g" www/webshell.css
Once you change the font size, you'll need to change the default background or remove the JPG for solid black.
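Those sed tweaks can be dry-run against throwaway copies first, so a bad regex can't trash the real install. A sketch, with minimal stand-in file contents and an example port (8922) in place of the redacted one:

```shell
#!/bin/sh
# Dry run of the webshell sed tweaks against mock copies of the files.
set -e
WORK=$(mktemp -d)
printf 'PORT = 8022\n' > "$WORK/webshell.py"
printf 'body { margin:0; font:10pt mono; }\n' > "$WORK/webshell.css"
sed -i "s/8022/8922/g" "$WORK/webshell.py"   # 8922 is a hypothetical port
sed -i "s/margin:0;/margin:25px 0px 0px 0px;/" "$WORK/webshell.css"
sed -i "s/font:10/font:12/g" "$WORK/webshell.css"
grep -h . "$WORK/webshell.py" "$WORK/webshell.css"
```

Once the output looks right, point the same sed lines at the real webshell.py and www/webshell.css.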
The documentation is a little unclear on the fact that the program, by default, only listens on 127.0.0.1, so you have to launch the script with -i 0.0.0.0 to accept outside connections. Of course, you'll have to build your own SysV start script.
A side note: there are websites that run this program as a free service to let you web into their server, then hop over to yours. You probably don't want to use those free services. Sure, it's SSL from you to them, and SSH from them to your server, but what's the protocol that encrypts the link between the SSL and the SSH? Can you say "none"?
Archiving Solaris... Forever!
To setup your user environment, add to ~/.profile
export PS1="\w #"
export PAGER=less
export TERM=ansi
alias vi='vi +"set showmode ignorecase" '
export EDITOR=vi

Man... I hope I never have to support Solaris again.
Wednesday, June 23, 2010
Compiling Apache Without Default Modules
To compile a slim, modular Apache, use:
./configure --enable-mods-shared=all --with-mpm=prefork \
    --disable-deflate
make; make install
/usr/local/apache2/bin/httpd -l
Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c

But this raises an interesting question-- what if we actually want to statically compile some, but not all, modules? Maybe we want a dedicated proxy/balancer:

./configure --enable-mods-shared=all --with-mpm=prefork \
    --disable-deflate --enable-proxy=static \
    --enable-proxy-ajp=static --enable-proxy-balancer=static
make; make install
/usr/local/apache2/bin/httpd -l
Compiled in modules:
  core.c
  mod_proxy.c
  mod_proxy_ajp.c
  mod_proxy_balancer.c
  prefork.c
  http_core.c
  mod_so.c

And that's what we are looking for. We'll need to get SSL on this puppy, but it's bed time, so go to sleep.
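That `httpd -l` check can be scripted, which is handy if you build these proxies often. A sketch -- the heredoc stands in for a live binary's output, and the function name is made up:

```shell
#!/bin/sh
# Verify a build has the proxy modules compiled in by parsing `httpd -l`.
httpd_l() {
cat <<'EOF'
Compiled in modules:
  core.c
  mod_proxy.c
  mod_proxy_ajp.c
  mod_proxy_balancer.c
  prefork.c
  http_core.c
  mod_so.c
EOF
}
for MOD in mod_proxy mod_proxy_ajp mod_proxy_balancer; do
    httpd_l | grep -q "^  ${MOD}\.c$" && echo "$MOD: static"
done
```

On a real system, replace the heredoc function with `/usr/local/apache2/bin/httpd -l`.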
Friday, June 18, 2010
Off Peak Energy Usage
You know what... I don't care! It doesn't save me any money.
I get charged a flat rate, nights, days, weekends. How is my washing machine choosing to wash my clothes later because it costs less helping me? It's not! Who is it helping? The global ecosystem? BRRRRAP! Wrong answer, you naive twit-- the power company's profit margin. That's it, nobody else.
Cost savings are not passed to the consumer, they pad the Wall Street coffers. If the power companies had invested in their infrastructures over the past four decades, we'd already have a smart grid. But noooooooo. They pocketed the profits and have left the consumers to deal with their short-sightedness.
So, when I'm ready to wash my clothes, I'm doing it. If I overload the grid and brown you out, too bad. You should have installed a point-of-use energy system... or at least a UPS. Like me.
Wednesday, June 16, 2010
Another Tomcat Post :: SSL (Part 2)
This entry is a follow up to a post a few days ago quaintly titled Another Tomcat Post :: SSL. Since that post, I have made a momentous discovery regarding Tomcat encryption.
There is an annoying error message that writes to catalina.out on Tomcat restart, which, it turns out, is relevant to why this SSL has been such a mess.
Jun 16, 2010 8:55:14 PM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path

This is telling us that we are not using packages optimized for Tomcat. The message can be cleared by installing the tomcat-native RPM:

yum install -y tomcat-native

On restart, the irritating little message is gone.
One of the things that this package does is to include an updated crypto stack for Tomcat that includes x509. Glory to the mighty Gods of Olympus! Once installed, we modify the SSL stanza in the server.xml file:
maxThreads="150" scheme="https" secure="true"
SSLCertificateFile="conf/custom.crt"
SSLCertificateKeyFile="conf/custom.key"
clientAuth="false" sslProtocol="TLS" />

We've removed the keystore directives and used commands to point to our x509 cert and key. Restart Tomcat.
To test:
echo | openssl s_client -connect localhost:8443 | \
    grep subj

If you are not using RPMs, but Apache's Tomcat release, look in CATALINA_HOME's bin directory for a tar file.
Sunday, June 06, 2010
XenServer System Alerts From the Future
Thursday, June 03, 2010
Another Tomcat Post :: SSL
Okay, simple stuff first. When you install the mod_ssl RPM, it creates a dummy cert. Let's nuke it and create our own:
cd /etc/pki/tls
mv private/localhost.key private/localhost.key.rpm
mv certs/localhost.crt certs/localhost.crt.rpm
openssl genrsa -out custom.key 2048
openssl req -new -nodes -subj /O=doug \
    -key custom.key -out custom.csr

And the CSR gets sent to the non-existent CA... So, fudge it:

openssl x509 -req -signkey custom.key \
    -in custom.csr -out custom.pem

Distribute:

cd private; mv ../custom.key .
ln -s custom.key private.key; cd ..
cd certs; mv ../custom.{csr,pem} .
ln -s custom.pem custom.crt
ln -s custom.pem localhost.crt
service httpd reload

And, yes, memorize *all* that crap.
Test httpd:
echo | openssl s_client -connect localhost:443 | \
grep subj
Getting Tomcat to work with SSL reminds me of the chorus from an "Offspring" song, Stuff Is Messed Up.
Tomcat requires our cert be converted from x509 to pkcs12. This is not difficult, but there are two critically important issues with the following command. The assigned name must be 100% unique across all files in the working directory. As such, make sure you do this next section in an empty directory.
The second issue is that you will be prompted for a password. It must be more than six characters, even though it will accept smaller, including NULL. Would it surprise you to hear that ultimately your password is going to be coded on the system in clear text? The default clear text password is "changeit".
openssl pkcs12 -export -name unique \
    -in /etc/pki/tls/certs/custom.crt \
    -inkey /etc/pki/tls/private/custom.key \
    -out custom.p12

Ready for another puzzle? Tomcat needs another component called a keystore. Beware: this command assumes your goal is to compile all the pkcs12 files in the working directory. Wait-- don't assume I said something that I didn't: the command does not source *.p12, it evaluates all the files in the directory, and if a file is a pkcs12 file, it compiles it. That's why we're in a nearly empty directory.
And remember the password we entered a moment ago? The keystore's password must match. Oh... and the Tomcat SSL documentation is wrong.
keytool -genkeypair -keystore custom.jks \
    -alias unique -dname O=doug

Notice that there is no -in, and what was a pkcs12 name is now the jks alias.
*** Updated 6/16/2010 ***
I have since learned the steps above do not work as I thought. The correct next step is not genkeypair, but instead:
keytool -importkeystore -v -srcstoretype pkcs12 \
-srckeystore custom.p12 -destkeystore custom.jks
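The x509-to-pkcs12 leg of this can be rehearsed end to end in a scratch directory, which keeps the unique-name rule from biting. A sketch using openssl only (the self-signed cert is a stand-in for the one built earlier; the keytool -importkeystore step would follow on a real box):

```shell
#!/bin/sh
# Rehearse the cert -> pkcs12 conversion in a scratch directory.
set -e
WORK=$(mktemp -d); cd "$WORK"
# Stand-in self-signed key + cert, same subject as the post's example
openssl req -x509 -newkey rsa:2048 -nodes -subj /O=doug \
    -keyout custom.key -out custom.crt -days 365 2>/dev/null
# Export to pkcs12; password is more than six characters, per the gotcha above
openssl pkcs12 -export -name unique -in custom.crt \
    -inkey custom.key -passout pass:changeit -out custom.p12
# Prove the p12 is readable with the same password
openssl pkcs12 -info -nodes -passin pass:changeit -in custom.p12 \
    >/dev/null 2>&1 && echo "p12 readable"
```

If that prints "p12 readable", keytool should accept the file with the same password.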
Almost home... Tell Tomcat where to find the keystore, cert, and key by adding the following to the server.xml, just above the line that contains "sslProtocol":
maxThreads="???" scheme="https" secure="true"
keystore="conf/custom.jks" keystorePass="changeit"
clientAuth="false" sslProtocol="TLS" />

Before you save the file, make sure the stanza you just edited is not commented out by a set of <!-- --> symbols.
Symlink the original key and cert back to Tomcat's conf directory. Restart (reload) Tomcat. Test:
echo | openssl s_client -connect localhost:8443 | \
grep subj
What a mess, but at least it works.
Wednesday, June 02, 2010
Tomcat Load Balancing via AJP Module
I like this config for /etc/httpd/conf.d/proxy_ajp.conf:
<Proxy balancer://cluster-http>
    ProxySet lbmethod=bytraffic nofailover=on
    ProxySet stickysession=JSESSIONID timeout=15
    BalancerMember http://tomcat1:8080 \
        retry=120 loadfactor=1
    BalancerMember http://tomcat2:8080 \
        retry=120 loadfactor=1
    BalancerMember http://tomcat3:8080 \
        retry=120 loadfactor=1 lbset=1
</Proxy>
ProxyPass /sample balancer://cluster-http/sample

In the ProxySet lines we define basic values, the only interesting one of which is timeout. This determines how long a device has to respond before it is considered "down". Next we list the nodes, in this case three.
Oddly, the timeout can be specified cluster-wide, but the retry, which specifies how often we check to see if "down" nodes are up, is listed individually. (It is possible to list timeout individually.) The loadfactor determines the weighting for each node.
The fun value is lbset. This one effectively allows the specification of a hotspare. In the above example, all "0" get hit all the time, and "1" gets no traffic. If all "0" nodes go down, "1" gets traffic.
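For clarity, the hot-spare pattern reads better with lbset made explicit on every node. A config sketch, with hypothetical host names:

```apache
<Proxy balancer://cluster-http>
    ProxySet lbmethod=bytraffic stickysession=JSESSIONID timeout=15
    # Tier 0 takes all traffic while any member is up
    BalancerMember http://tomcat1:8080 retry=120 loadfactor=1 lbset=0
    BalancerMember http://tomcat2:8080 retry=120 loadfactor=1 lbset=0
    # Tier 1 is the hot spare; used only when all of tier 0 is down
    BalancerMember http://tomcat3:8080 retry=120 loadfactor=1 lbset=1
</Proxy>
```

lbset defaults to 0, which is why the original config only had to tag the spare.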
Ready for sexy? Add this:
<Location /balancer-manager>
    SetHandler balancer-manager
</Location>
ProxyPass /balancer-manager/ !

Now you have an interactive, web based management screen.
Monday, May 31, 2010
Setting XMMS As Default Player on F13
# grep Exec /usr/share/applications/xmms.desktop
Exec=xmms -e %F
# sudo vi /usr/share/applications/xmms.desktop

And change the -e to -p. This changes XMMS's behavior from enqueue to play. For some reason, someone decided they wanted a double click to add songs to a manually executed playlist-- every other player (including Windows Media Player!) uses drag and drop to add songs to the playlist.
BTW: What follows is the method originally posted, which ends up working on a file by file basis rather than globally.
1. Open the Music folder (or any location that has an MP3.)
2. Right click an MP3 file and select Open with Other Application.
3. Find and highlight XMMS.
4. Expand the option to Use a custom command.
5. Add " -p " to the xmms command. (The spaces are important.)
6. Check Remember this application.
7. Click Open.
Thursday, May 27, 2010
OpenSSL: Love At Last
Determine a website's SSL cert expiration date:
echo "" | openssl s_client -connect mail.google.com:443 \
2> /dev/null | openssl x509 -noout -text | \
grep After
Verify a file is a key:
openssl rsa -noout -check -in localhost.xxx
Find a key file that is mislabeled:
for J in `find . -type f`; do echo $J; \
openssl rsa -noout -text -in $J 2> /dev/null | grep Pri; \
done
Verify a file is a certificate:
openssl x509 -noout -in localhost.xxx -enddate
Find a cert file that is mislabeled:
for J in `find . -type f`; do echo $J; \
openssl x509 -noout -enddate -in $J 2> /dev/null; \
done
Verify the key matches the cert:
[ `openssl rsa -noout -modulus -in localhost.key` \
  == `openssl x509 -noout -modulus -in localhost.crt` \
] && echo yes || echo no

(Remember that those are back-ticks.)
View a PKCS12 binary file:
openssl pkcs12 -info -nodes -in localhost.p12
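The "verify" and "find mislabeled" recipes above combine naturally into one little classifier. A sketch -- the function name `pemtype` is made up:

```shell
#!/bin/sh
# Report whether a PEM file is a key, a cert, or neither.
pemtype() {
    if openssl rsa -noout -check -in "$1" >/dev/null 2>&1; then
        echo key
    elif openssl x509 -noout -enddate -in "$1" >/dev/null 2>&1; then
        echo cert
    else
        echo unknown
    fi
}
# Example run against a freshly generated key
TMP=$(mktemp)
openssl genrsa -out "$TMP" 2048 2>/dev/null
pemtype "$TMP"
```

Loop it over `find . -type f` and you have the mislabeled-file hunt from above in one pass.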
Glorious Peoples Shuttle of Greatness in Space
Wednesday, May 26, 2010
Happy Fedora 13 Day
* Disk Druid has changed, allowing for safer isolation of disks that should not be formatted. Unfortunately, I had problems getting LVM to work.
* Once again I loaded KDE, and found it beautiful, then promptly did away with it. I just can't stand Konsole-- I've got to have fast cut and paste.
* Looks like Plymouth for ATI Radeon is working, but I'm back to not being able to get the resolution beyond 1024x768.
* NIS still doesn't work out of the box, but I've got to move to Kerberos anyway.
* And NetworkManager... It just keeps getting worse and worse.
* The Grub kernel line is significantly more complicated, because it seems as if it is being ordered to NOT load modules.
I'll reload again tomorrow and we'll see if there are any new applications.
Monday, May 17, 2010
RedHat Tomcat 6 with Web Manager
Starting with my "standard load" which does not include Apache:
# yum install tomcat6 tomcat6-admin-webapps

This will snag a quantity of dependencies, but will install with the web manager broken. Before starting Tomcat we will need to "fix" the web manager. While we're at it, let's do some reorganizing:

# ls -l /usr/share/tomcat6/ | awk '{print $8" "$9" "$10}'
bin
conf -> /etc/tomcat6
lib -> /usr/share/java/tomcat6
logs -> /var/log/tomcat6
temp -> /var/cache/tomcat6/temp
webapps -> /var/lib/tomcat6/webapps
work -> /var/cache/tomcat6/work

Okay... They tried to organize things, but I've never seen anybody put an application in /usr/share on a production system. Let's go with /opt:

# mkdir /opt; cd /opt; ln -s /usr/share/tomcat6 tomcat
# ln -s tomcat $(cd /usr/share/doc; ls -d tomcat6-*)
# ls -l | awk '{print $8" "$9" "$10}'
tomcat -> /usr/share/tomcat6
tomcat6-6.0.18 -> tomcat

Time to fix the manager. The web manager will ask for a user to authenticate, even though no user is allowed by default.
# cd conf; grep manager tomcat-users.xml

One of the lines should show the user "tomcat" with the role of "manager". Notice the line is commented. Obviously we un-comment the line to allow a manager. We should now be ready:
# service tomcat6 restart

Hit the manager at something like:
http://tomcat.example.com:8080/manager/html
Sunday, May 16, 2010
Fedora 10+ Kernel Modesetting (KMS)
I'd been running Fedora 6 or 8 to work with Xen. That project has been finished for several months, so when I installed the new drive, I loaded 11. I found, however, that I could not get the Gnome desktop to run at better than 1024x768. I had run 11 before without problems using the same monitor at 1600x1200-- but that was on "third", who is now running ESX.
I checked the twin out, and the card should have been able to run 1280x1024. I could get system-config-display to specify 1280, but the desktop would always drop to 1024. After investigating the problem, I found the culprit was KMS, or kernel modesetting. (Yes, it's one word.)
The idea is that the kernel, which owns all the hardware anyway, will decide the best resolution, and the software will do as it is told. Unfortunately, it works with Intel, plays nice with nVidia, but there are a few issues with ATI. Turns out, third was nVidia, and the twins are ATI.
A feature that is closely tied to KMS is the new boot progress screen called "Plymouth". Without KMS, Plymouth is just a three color progress bar. With KMS, it's a blue sun projecting solar flares. For these ATI Radeon machines, no Plymouth. This is because KMS isn't backward compatible with the older ATI hardware. As a result, Gnome looked to the kernel for the correct resolution, the kernel said "don't know", and so the desktop could not be made to exceed 1024.
In the end, the solution was to add a Grub argument:

nomodeset

Still no Plymouth, but when Gnome asks the kernel for the correct resolution, the response is "decide yourself". Worked for me. Other possibilities, any one of the following:

vga=795
radeon.modeset=0
radeon.modeset=1
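Rather than hand-editing grub.conf, the argument can be appended with sed -- rehearsed here against a copy, since a typo on the kernel line makes for an exciting reboot. The sample stanza is illustrative:

```shell
#!/bin/sh
# Append nomodeset to every kernel line in a *copy* of grub.conf.
set -e
GRUB=$(mktemp)
cat > "$GRUB" <<'EOF'
title Fedora (2.6.30.10-105.fc11)
        root (hd0,0)
        kernel /vmlinuz-2.6.30.10 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
        initrd /initrd-2.6.30.10.img
EOF
sed -i '/^[[:space:]]*kernel /s/$/ nomodeset/' "$GRUB"
grep kernel "$GRUB"
```

Once the copy looks right, run the same sed against /boot/grub/grub.conf.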
Good reading:
Plymouth Graphical Boot
How To Enable Graphical Boot with Plymouth
Monday, May 10, 2010
Sudo Read Only All
Here's where life gets strange... The customer didn't mind her looking at the box, they just didn't want her changing anything. The best way to make sure she doesn't change anything is to not give her sudo.
Rock --> You <-- Hard place.
Solution: /usr/bin/less is a read only command, so let's just sudo it! Unacceptable, as there is a thirty year old hack that lets you bang out of less to a command line, sayeth information security. Easy enough to fix...
echo "username ALL=NOEXEC: NOPASSWD: /usr/bin/less" >> /etc/sudoers

The NOEXEC: prevents the "bang hack" and allows full system visibility.
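On distros with an /etc/sudoers.d directory, a drop-in file is tidier than appending to the main sudoers, and can be syntax-checked before it goes live. A config sketch -- "username" is a placeholder:

```text
# /etc/sudoers.d/username-less  (validate first: visudo -cf <file>)
username ALL = NOEXEC: NOPASSWD: /usr/bin/less
```

The tags read left to right: NOEXEC: blocks less from exec'ing a shell, and NOPASSWD: skips the password prompt for this one command.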
Thursday, May 06, 2010
Splitting MPEGs On The Command Line
A couple hours later, everything was assembled, and I transferred my first tape. A problem, though: it was too much effort to get the file to start and end at the right place. I spent some time screwing with some of the worthless video editing software, when I found a couple posts that solved the problem.
And, yes, it's a command line solution. Your GUIs are so overrated.

ffmpeg -vcodec mpeg2video -r 29.97 -b 2000k -ab 224k \
    -i Cap00.mpg -ss 00:00:37 -t 2:06:30 jurassicPark.mpg

These settings will take an input file encoded at the card's native settings, and chop off everything before 37 seconds and after 2 hours (plus change). I used Mplayer to get the time values.
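Picking the -ss and -t values gets easier with a timestamp-to-seconds helper, since the math across hours and minutes is easy to fumble. A sketch -- the function name is made up:

```shell
#!/bin/sh
# Convert H:MM:SS timestamps (as read off Mplayer) to seconds.
hms2s() {
    echo "$1" | awk -F: '{ s = 0; for (i = 1; i <= NF; i++) s = s*60 + $i; print s }'
}
# The cut points from the command above:
START=$(hms2s 0:00:37)
DUR=$(hms2s 2:06:30)
echo "start=${START}s duration=${DUR}s"
```

Handy for checking that the clip duration actually covers the feature before committing to a two-hour encode.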
Sunday, May 02, 2010
Norton AV Products Still Suck
I got this pop-up. Bad news. I ran a full scan. Nothing. The message returned. Reboot, update, disconnect from the network, scan, clean. The message returned.
For lack of any other option, I clicked "Get Help". Eventually, I was thrown into a chat session with an "analyst". After some discussion, he determined I needed to upgrade my software from 3.x to 4.x, which seemed strange. Is he telling me that v3 is known to report bogus infections? He never said "yes, v3 has a bug," but he did say there was no virus, and the upgrade would stop the messages.
Suck.
Saturday, May 01, 2010
Google Earth Browser Plugin
That would be a first person shooter, for the children in the audience... under 30.
The cool part is that you can see the automobiles on display inside the Verizon Center and the Wizards on the outside jumbotron. The detail is so good, you can read the hours on the door of the Chipotle restaurant.
What is strange is how much territory does not exist. Go one block north to Chinatown, and the arch is not there. There are entire blocks that are missing. So this got me thinking: What determines what shows up?
A few hints. The Verizon Center always has a giant movie poster; in the 3D model, it's for "The Heartbreak Kid", which came out in October of 2007. In Street View, it's GI Joe, which came out in August of 2009. This implies that Google Earth does not depend on Street View.
I'm afraid the system may depend on crowd sourcing. It is up to the community to model the buildings. This poses two problems. First: what if someone chooses to model the buildings wrong? Second: if they are expecting me to model my own house, it isn't going to happen.
Not only would it end up looking like an M.C. Escher print, but I've just got too many other things to do. Nothing important, mind you. It's not like I've got a life, or anything.
Sunday, April 18, 2010
History of the US Moon Base
This is apparently part of a series designed for High School students to teach the history of space exploration. The top level page includes a history of the shuttle and discussion of a Mars mission.
Thursday, April 15, 2010
Password Change Policies Do Not Enhance Security
Saturday, April 10, 2010
Started Japanese, Ended American
Win.
Wednesday, April 07, 2010
Google Earth Vehicle Shoots Self... Sort Of
Monday, April 05, 2010
Cherry Blossom Firworks Pictures
Sunday, April 04, 2010
Cherry Blossom Fireworks Fail
The paper said the fireworks would be part of the music festival going on at the Southwest Water Front (A) at 7th and Maine. The crowd assumed it would be at the Tidal Basin, and collected on its shores (B).
The problem with the waterfront is that it is highly developed, so there are few places to kick back away from the crowd. I decided I would watch from Potomac Park (C) and have a picnic.
To get to the park, I had to foot it in through the crowd from the Smithsonian Metro. I made a wrong turn, and ended up "on the wrong island" (D). The good news was no crowd. The bad news was trees and bridges obstructing the view.
I tried my Sprint PDA's navigation system-- it said I was at the Pentagon Lagoon.
Better luck next year.
Saturday, April 03, 2010
OpenNebula- Red Hat Xen Node
To test Red Hat functionality, I needed to build a Xen node. I could have used CrapOS... I'm sorry, that was a typo... I meant to say: I could have used CentOS for this test, but we all know how stupid that would be.
What follows is a little black magic used only for testing. These are the steps needed to take an @base install on 5.5 and get Xen running. Since yum sometimes gets confused on this process, it is best done directly off the CD or a mounted image:
rpm -Uvh Server/kernel-xen-[0-9]*.rpm
rpm -Uvh Server/bridge-utils-[0-9]*.rpm
rpm -Uvh Server/xen-libs-[0-9]*.rpm
rpm -Uvh --nodeps Server/SDL-[0-9]*.rpm
rpm -Uvh VT/libvirt-[0-9]*.rpm
rpm -Uvh VT/libvirt-python-[0-9]*.rpm
rpm -Uvh VT/python-virtinst-[0-9]*.rpm
rpm -Uvh VT/xen-[0-9]*.rpm
sed -i "s/default=1/default=0/" /boot/grub/grub.conf
reboot

Note: the --nodeps argument is to avoid complaints about sound drivers. Since we are building a cloud node, we don't care 'bout no stinkin' sound drivers.
Now, the dependencies for OpenNebula:

yum install -y ruby xmlrpc

Unfortunately, there is one dependency missing from the Red Hat distro, so we have to grab it from Fedora. Due to some glibc versioning issues, we go back to F8:

wget http://archive.fedoraproject.org/pub/archive/fedora/linux/updates/8/i386.newkey/xmlrpc-c-1.06.31-2.fc8.i386.rpm
rpm -Uvh --nodeps xmlrpc-c-1.06.31-2.fc8.i386.rpm

And let's try OpenNebula:

rpm -Uvh one-1.4.0-1.i386.rpm
Preparing... ########## [100%]
1:one ########## [100%]
Wednesday, March 31, 2010
Tuesday, March 30, 2010
OpenNebula Cluster
The thing to understand about OpenNebula is that it is a cloud environment, not a virtualization platform. This means that we need to choose an OS first. Because of the hardware, I'm using Fedora 8 with Xen. (I prefer Fedora Core 6, since it is more like RHEL 5.2, but it was not stable with OpenNebula. Lesson learned: use F8.)
I did a base kickstart on the nodes, to ensure a slim footprint. I loaded the Xen kernel and libraries, but left off virt-manager to avoid the overhead of an X server. In my cluster, three of the four nodes are identical, but the fourth is more powerful. That will be our head node.
When I tried to load the one-1.4.0 rpm on the head node, I ran into dependencies. (Ah, yes: the download says it's for F11, and I'm using F8.) Needed packages:
yum install -y xmlrpc xmlrpc-c
yum install -y ruby

This extracted to the /srv/cloud/one directory.
The RPM created accounts:
/etc/passwd: oneadmin:x:512:903
/etc/group: cloud:x:903

The user's home directory is set to the containerized directory created above. A little simple sysad magic cleans up the account and assigns keys to allow oneadmin to SSH to localhost without a password.
And here is where it gets beautiful: We now use NFS to export the directory to the nodes. Log in to the nodes, mount the NFS share, and replicate the user accounts. (Note to self: add user and group to NIS.) With the mount in place, return to the head node, become the oneadmin user, and SSH to the node. Since the user's home is the share, and keys are in the share, we get right in.
The infrastructure is in place; now it's on to VMs.
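The "assign keys" step glossed over above amounts to staging a passphrase-less keypair in oneadmin's NFS-shared home. A sketch, run here against a scratch directory standing in for that home (paths are illustrative):

```shell
#!/bin/sh
# Stage passwordless-SSH keys the way the oneadmin account needs them.
set -e
FAKE_HOME=$(mktemp -d)
mkdir -p "$FAKE_HOME/.ssh"
ssh-keygen -q -t rsa -N "" -f "$FAKE_HOME/.ssh/id_rsa"
# The share is both ends of the connection, so the pubkey authorizes itself
cat "$FAKE_HOME/.ssh/id_rsa.pub" >> "$FAKE_HOME/.ssh/authorized_keys"
# sshd refuses sloppy permissions on these
chmod 700 "$FAKE_HOME/.ssh"
chmod 600 "$FAKE_HOME/.ssh/authorized_keys"
echo "keys staged in $FAKE_HOME/.ssh"
```

Because the home directory is the NFS share, every node that mounts it sees the same key and authorized_keys, which is why the head node "gets right in".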
Sunday, March 28, 2010
SVN Ignore
I keep forgetting how to tell SVN to ignore a directory. This is a big deal for me, since I have a habit of creating working directories outside of my server's web root, then moving the files once they are validated. It's a security thing, but it means my dev servers always have a lot of dead files lying around. Here's how to ignore the working directories.
First, ensure your EDITOR environment variable is set:

export EDITOR=vi

(Add this to your ~/.bash_profile if needed.)
From SVN root, execute:
svn propedit svn:ignore ./target/dir

This should toss you into the editor. Add the appropriate bash style wildcards representing file names and types. For me, a simple * (asterisk) usually suffices. Save the file, and run svn status to verify the results.
Thursday, March 25, 2010
Napoleon
There were other French paintings by Matisse, Monet, and Toulouse-Lautrec. In the same collection, a couple by Picasso and Van Gogh.