Monday, December 27, 2010

An EC2 Conundrum

Whenever I would lecture on Amazon's EC2, I would point out that Amazon's internal infrastructure is (effectively) EC1, and when they need capacity, it comes from EC2. In the past few weeks, I've seen this first hand. Of course, we need to remember that this is Amazon's peak period, so a resource crunch should be expected as part of the normal patterns of business activity, but I was caught by surprise in one respect.

The Amazon cloud, known as Amazon Web Services (AWS), is billed on three meters:
* VM Resources, such as CPU and memory
* Storage, either block volumes or web based files
* Bandwidth, both into and out of the cloud

I have a set of VMs that I launch as needed, so I am not always billed for cycles or bandwidth. When I need a VM, I don't want to have to upload all the supporting applications and data, or go through a complex configuration procedure. The solution was to grab an Elastic Block Store (EBS) volume, which looks to a Linux VM like a disk device. I provision the VM, connect the volume, log in, and mount the device, where I keep a set of scripts that rebuild the application server in less than a minute.
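For the record, the launch-and-attach sequence with the EC2 API tools looks roughly like this (all the IDs are placeholders, not my real ones):
ec2-run-instances ami-xxxxxxxx -z us-east-1b
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf
Then from inside the VM, mount /dev/sdf and run the rebuild scripts.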

Here's where I got burned: an EBS volume is actually a LUN on a SAN, which resides in a data center somewhere in the world. Amazon has four regions: Virginia, California, Ireland, and Singapore. I picked Virginia. But in Virginia, they have four data centers, called availability zones, labeled A, B, C, and D. My volume is in Virginia "B". Unfortunately, they have had insufficient capacity in VA-B to launch a VM since about 21 December.

This means I've got stuff on a disk, somewhere across the Potomac, that I can't get to, because I don't have a machine to access it. I could launch a VM in VA-C or VA-D, but there is no native mechanism to allow VMs to mount disks that live in another data center. Thus the conundrum: How do we protect against this situation?

The answer is obvious: clustered replication. Two EBS volumes in different data centers, with one VM acting as the master node and another VM acting as the replication node. Unfortunately, this doubles the cost of the system... from $15 a month to $30 a month. Not really that much... and that assumes my data is critically important, which it isn't.
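As a sketch, the replication node could be kept in sync with nothing fancier than a cron entry on the master (the hostname and mount point here are hypothetical, and this assumes each node has its own volume mounted at /data):
*/15 * * * * rsync -az --delete /data/ replica-va-c:/data/
A real deployment would want something transactional like DRBD; for my not-so-critical data, a cron entry is about all the effort it deserves.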

But you'd think Amazon would have provided a way to prevent this from happening. After all, it's not like me paying twice as much every month is something they'd actually want to happen.

Saturday, December 25, 2010

Schug Pinot Noir

This is a California (Sonoma Coast) wine that I ended up with twice during the holidays. The first time was at a company gathering; the second was when I got it as a gift. It's a good wine with a prominent smoky flavor, but not high tannins.

The company gathering was at a high end restaurant, but the wine was served at dining room temperature. Once I chilled the bottle I got at home, I was much more satisfied. This is also a wine that truly needs to breathe to mellow. Though not expensive, it is close to my high end for everyday wine. Out of 10, I'll go a strong 6, pushing 7, with the right preparation.

Sunday, December 19, 2010

A Second Apache Instance With YUM

I ran into a situation where I needed two separate instances of the Apache HTTPD service on the same server. I couldn't simply virtual host the second site, because it needed a radically different configuration from the first instance. My first reaction was to use YUM to install the first instance, and then snag the Apache source and compile the second instance. The problem with this was that the two instances would be different versions: the first easily patched and upgraded with YUM, the second an administrative nightmare of patching and recompiling.

To avoid the overhead of administering the second instance, I started investigating an old RPM option I'd never used: --relocate. It turns out this option was the opposite of what I had expected: it moves the install location of the one instance, rather than installing a second. And besides, using RPM manually was only incrementally better than the original idea.
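For reference, the syntax I was experimenting with looked roughly like this (package version hypothetical); note that non-relocatable packages also demand --badreloc, which was my first hint this was the wrong tool:
rpm -ivh --badreloc --relocate /etc/httpd=/opt/httpd2i/etc/httpd httpd-2.2.8-3.i386.rpm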

So what about YUM? There is an option for --installroot=/path. Seems like what I wanted: instead of distributing files from the system root to /etc, /usr, /var, let's put the httpd files in a tree under /opt. What happened when I ran the command surprised me:
yum install httpd --installroot=/opt
<snip>
Transaction Summary
=============================
Install 77 Package(s)
Update 0 Package(s)
Remove 0 Package(s)

Total download size: 64 M
Is this ok [y/N]:
This thing is going to install not only the httpd binaries, libraries, and config files... but every dependency... all of which already exist on the system! And it's going to require 64M of disk space!

Oh, wait... I've got like 250G of free space. Do I really care if it takes 64M? No! And so, I answered "Y". What did we end up with?
ls /opt
bin dev home lib64 mnt proc sbin srv tmp var
boot etc lib media opt root selinux sys usr
Ouch! That's ugly. Looks like the better (cleaner, prettier) choice would have been:
yum install httpd --installroot=/opt/httpd2i
For the sake of simplicity, through the miracle of virtualization, let's just consider that fixed.

Let's see what we got:
/usr/sbin/httpd -v
Server version: Apache/2.2.8 (Unix)
/opt/httpd2i/usr/sbin/httpd -v
Server version: Apache/2.2.8 (Unix)
What about an update? I added a repo file to include the updates directory on the satellite server.
yum update -y httpd
<snip>
/usr/sbin/httpd -v
Server version: Apache/2.2.9 (Unix)
/opt/httpd2i/usr/sbin/httpd -v
Server version: Apache/2.2.8 (Unix)
As expected for the base install, but no love from the second instance.
yum update -y httpd --installroot=/opt/httpd2i/
Setting up Update Process
No Packages marked for Update
Still no good. As a matter of fact, nothing seemed to work. So, as a workaround, I tried this:
yum install httpd -y --installroot=/opt/httpd2i-2/
rsync -Pr --update /opt/httpd2i-2/* /opt/httpd2i/
/opt/httpd2i/usr/sbin/httpd -v
Server version: Apache/2.2.9 (Unix)
rm -rf /opt/httpd2i-2
In a nutshell, create a third instance, and copy the third instance over the second, hoping not to overwrite any configuration files in the process.

Does this solve the original problem? Sort of. Is it easier than recompiling? It's faster. Just one more problem... As is, the new Apache does not run. Looks like we need some more hacking. Stay tuned for part 2.

*** Update ***
On second thought... I'll just recompile. It turns out there are some references to the original path in the RedHat binaries. That's bad form on their part, and they should be ashamed, but by the time I figure out how to hack around this, the recompile will be done.

So, no part two. Just snag the binaries and be done with it. That doesn't mean that this feature is useless. It just means that it didn't solve this problem.

Wednesday, December 15, 2010

Evil Free Cell Game: #29868

I finally beat it. Now I can go to bed.

Thursday, December 09, 2010

So, So, Sad: And its ITIL's Fault

I signed up for a series of ITIL classes with the goal of earning the ITIL Expert Certification. (Their choice of words, not mine; I'm leery of "experts", personally.) There are five classes, each with a test. Once you pass all five classes and tests, there is a sixth class and test.

Yesterday, I found out that I failed the fifth test. I was crushed! Not because I failed: I fail all the time. Constantly. As a matter of fact, I failed tests one and two, but those didn't bother me. Let me explain.

I call the testing format Three Little Kittens.

You get a case study.
Three little kittens, have lost their mittens.
You must select "the best" solution, based upon four choices:
1. They bought gloves
2. They found their mittens
3. And they shall have no pie
4. Kittens don't wear mittens
The operative factor in this process is the fact that we have to pick "the best" solution. The answers are weighted with scores of 5, 3, 1, and 0 points. In the case study above, the answers logically break out as follows.
1. Throw money at the problem
2. A definitive solution
3. Punishment does not solve the problem
4. True, but irrelevant
Thus, the 5 point answer is "2", the 3 point answer is "1", the 1 point answer is "3", and number "4" is worth nothing, even though it is completely accurate.

I failed the first two tests because I did not personally recognize the level of dedication that is needed for the certification track. Furthermore, the class vendor, Global Knowledge, has not done a good job of setting expectations. Embarking on this process requires either significant management and project experience, or the purchase of supplemental material and several weeks of study before the class.

This certification also requires complete support from your employer. They have got to be willing to give you the time and resources to succeed. They have got to recognize the value they will receive from this process.

I have scheduled retakes of the classes and tests for 1 and 2. After passing tests 3 and 4, I was very confident that I understood the testing method, and the amount of preparation needed beforehand. My results for test 5?
50% of answers were 5 pointers
12% of answers were 3 pointers
 0% of answers were 1 pointers
38% of answers were 0 pointers
Fail!

So, whose fault is this? Doesn't matter (see justification 3 above.) But I am sad.

Monday, December 06, 2010

Why Am I Changing Light Bulbs?

Being an ecologically conscious kinda guy, about three years ago, I went through my house and replaced my incandescent bulbs with CF bulbs. It was hugely expensive because I bought good GE brand bulbs. I saw it as an investment. Not only would the bulbs save energy, but they would last a thousand years.

Yes, a thousand years, damn it. They said the bulbs lasted five times longer than "normal" bulbs. They said that even though they cost more, they don't really cost more when you consider the cost of all the bulbs you won't have to buy in the future. And you're saving energy.

Yeah, because they put out half the light of normal bulbs. One bathroom had a two bulb fixture. If you walked into the bathroom in the middle of the night, it took five minutes for the lights to power-up. (I was already "done" by then.) So, I changed one of the CF bulbs for a normal bulb. For short jobs, the incandescent bulb fires up giving us 50% light, and for jobs taking more than 15 minutes, the CF gets us up to 90%. And still saves energy.

But the CF bulb has burned out. Not the normal bulb. The CF that was supposed to last five times longer! What's the deal? It's almost like they lied to me... or something.

Oh no: let's not lose sight of what's important. I'm saving the planet. I'm being environmentally aware. The CF bulbs reduce my carbon footprint. And they contain levels of mercury sufficiently toxic that improper disposal is criminal in some jurisdictions.

Now if you'll excuse me, I've got some endangered tigers that need to be shot.

Saturday, December 04, 2010

Oops, I've Seen All of Netflix

Yep. All of it.

Okay, not every movie in the entire Netflix inventory, but every movie that I've ever wanted to see. I'm down to watching foreign flix with subtitles. I'm to the point that I'm saying to myself: "Hey, I don't think that was all that bad. Oh sure, I purposely avoided it when it came out at the theaters since it wasn't worth spending money on, and I didn't watch it on HBO or TV since I had better stuff to do, but I'll add it to my Netflix queue."

Why? Because Netflix is a flat rate service. It's a buffet. All you can eat. There was an old Huey Lewis song:
"The sign on the door said all you can eat for $1.99, but one dollar's worth was all that I could stand"
(Like there is anything but old Huey Lewis songs.)

So here's what I have left:
- The Good, the Bad, the Weird: A subtitled Korean Western
- How to Train Your Dragon: What passes for a cartoon these days
- Daybreakers: Because the world needed another vampire movie
- Scott Pilgrim vs The World: For the 80's video game references

Sometime in the future, they'll have:
- Despicable Me: I dream of world domination
- Inception: Speaking of dreams
- Skyline: Gotta keep an eye on the aliens

I hope somebody invents something Earth shatteringly entertaining... quick.
*Sigh*

Saturday, November 13, 2010

Fedora and Screen Resolution: Pt2

Turns out, the xrandr info is not persistent across reboots. The obvious solution was to add the needed lines to /etc/rc.local, but xrandr will only execute if the X engine is online. The solution was to create a script and drop it into the X Window System's initialization sequence:
cat /etc/X11/xinit/xinitrc.d/setres.sh
#!/bin/sh
xrandr --newmode "1280x1024" 108.88 1280 1360 1496 1712 1024 1025 1028 1060 -HSync +Vsync
xrandr --addmode VGA-0 "1280x1024"
xrandr -s "1280x1024"
Notice the script's path.

Fedora and Screen Resolution

For the last few revisions of Fedora, I've had problems getting my physical systems to behave at the desired screen resolution. The problem is the way the video drivers attempt to detect the monitor-- it is assumed that the display hardware is an LCD. Unfortunately, some of my physical systems still use CRTs. Why? Because they work.

I don't know exactly when the change occurred, but starting with F13, my system always booted to 1024x768. Back in the day, we'd use system-config-display to set the proper resolution, but it is no longer a default component, and has become unreliable. Instead, we should use the less intuitive xrandr. So, here's what it takes to override the default resolution.

First, determine what xrandr sees. From within an X desktop environment, open a terminal window.
xrandr -q
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 2944 x 1024
VGA-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
  1024x768     60.0*
  800x600       60.3
  640x480       59.9
It knows that Screen 0 (the monitor) is capable of 2944x1024. For some reason, VGA-0 (the video card) is defaulting to 1024x768 as indicated by the asterisk.

Second, we need to configure more video modes. This requires that we feed xrandr a huge amount of information we don't have. Luckily, gtf knows what we need. Let's try to bump the resolution up one notch to 1152x864 with 60Hz refresh rate.
# gtf 1152 864 60
# 1152x864 @ 60.00 Hz (GTF) hsync: 53.70 kHz; pclk: 81.62 MHz
Modeline "1152x864_60.00" 81.62 1152 1216 1336 1520 864 865 868 895 -HSync +Vsync
We need the second line to feed to xrandr:
xrandr --newmode "1152x864_60.00" \
  81.62 1152 1216 1336 1520 864 865 868 895 -HSync +Vsync

xrandr -q
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 2944 x 1024
VGA-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
  1024x768     60.0*
  800x600       60.3
  640x480       59.9
  1152x864_60.00 (0x7e)   81.6MHz
    h: width 1152 start 1216 end 1336 total 1520 skew 0 clock 53.7KHz
    v: height 864 start 865 end 868 total 895 clock 60.0Hz
Our new mode has been staged, now it has to be connected to a video card, and activated:
xrandr --addmode VGA-0 "1152x864_60.00"
xrandr -q
Screen 0: minimum 320 x 200, current 1024 x 768, maximum 2944 x 1024
VGA-0 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
  1024x768     60.0*
  800x600       60.3
  640x480       59.9
  1152x864_60.00   60.0
xrandr -s "1152x864_60.00"
xrandr -q
Screen 0: minimum 320 x 200, current 1152 x 864, maximum 2944 x 1024
VGA-0 connected 1152x864+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
  1024x768     60.0
  800x600       60.3
  640x480       59.9
  1152x864_60.00   60.0*
If you're lucky, you can still see your video output!

Oddly, it would seem the part in quotes is completely arbitrary, but GNOME's desktop resolution utility needs that field to be in the 1234x567 format. The real headache is trying to determine the available resolutions and scan rates. If you really feel daring you can pump all the possibilities into the list and try them until you find one you like. Here's the list:
for J in `elinks http://bunger.us/rez.xml`; do \
  X=`echo $J | cut -dx -f1`; Y=`echo $J | cut -dx -f2`;\
  for K in 60 75 80 120; do \
    gtf $X $Y $K | grep Modeline; \
done; done
And here's the sledgehammer that populates the list:
for J in `elinks http://bunger.us/rez.xml`; do \
  X=`echo $J | cut -dx -f1`; Y=`echo $J | cut -dx -f2`;\
  for K in 60 75 80 120; do \
    L=`gtf $X $Y $K | grep Modeline | cut -d\  -f4-99`;\
    M=`echo $L | cut -d\  -f1`; \
    xrandr --newmode $L; xrandr --addmode VGA-0 $M;\
done; done

Monday, October 25, 2010

Worst Day Ever!

Was sick all weekend, didn't leave the house. Had to go to a class in downtown DC this morning. This is third in a series that has already gone pretty bad. Got up an hour early, but ended up leaving the house fifteen minutes late. Started the car and:
Just over 1/8 of a tank of gas
Yeah... I can make it... Twenty miles to the Metro station. I'll get gas on the way home.

Got to the Metro station okay.
But I'd forgotten my Smartrip automated train ticket!
Can't very well drive thirty minutes home and thirty minutes back to get it; I'll have to buy another. All the Smartrip kiosks are down. Only paper tickets available.

No big deal. I can buy a paper ticket. One problem... You can't pay for parking with a paper ticket, only with the automated Smartrip debit card. No cash... Only the single purpose debit card. And, no, I don't know why you can't pay for parking with a paper ticket.

Okay. I buy a day pass for $9.00 and will get a Smartrip card when I get to downtown. Turnstile won't accept my paper ticket. Everybody else is getting through? I walk toward one of the attendants, but don't even get a chance to open my mouth. She says:
9:30
Oh yeah, that's right... Day passes are only good 9:30AM to midnight. So I buy a $10 round trip ticket. And get on the train. And ride to downtown. And get off the train. And look for a kiosk to buy the Smartrip card. And it's down.

No time to worry, got to walk to class. What was the address? It's in my smart phone, in my case...
Crap, left my phone at home.
I've been to the building before, just look for landmarks. There it is. Made it on time.

But the vendor doesn't have custom curriculum for this particular class. I did bring my copy of the manual from the British Office of Government Commerce, but the material they are using is from the Netherlands. It was translated from the Queen's English to Dutch, then back to English. And the instructor is Indian. So much for all that time I spent learning German. And Italian.

So at lunch, I use the day pass to ride to Metro Center and buy a Smartrip preloaded with $5 to pay my parking fee of $4.75. And I didn't run out of gas.
So I guess the day wasn't that bad.
And sorry for the plagiarism, Jeff.

Wednesday, October 20, 2010

Remove XML Comments with sed

This should work for HTML also:
sed '/<!--/,/-->/d' /target/file.xml
Use sed -i to save the changes back into the file.
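GNU sed's -i also takes an optional backup suffix, which is cheap insurance here, since the range delete removes entire lines-- a comment sharing a line with real markup takes the markup with it:
sed -i.bak '/<!--/,/-->/d' /target/file.xml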

Sunday, October 10, 2010

Richard St. John's 8 Secrets Of Success


Of late, I've been trying to absorb some soft skills and have found the TED initiative to be quite entertaining. Here's one that hit home, because it was the most concise definition of successful habits and behaviors I've come across for free.

As a side note, we may want to debate whether or not I'm a valid judge of successful habits, as I am neither rich nor famous, and thus not successful. Well, I assure you I am quite satisfied with my anonymity. As for the money... I have complete trust in Wall Street, Congress, and Social Security.

Oh... My paraphrased, bottom line, for those of you with an attention span of less than three minutes:
Focus on the passionate and persistent pursuit of ideas that will allow you to push yourself to do work you believe will serve people for good.

Il Cortigiano Prosecco

I should probably get in trouble for this, but I enjoyed the wine more the second night since it was less bubbly. I'd put this one up against any Champagne for quality and presentation, but I'm sorry to say... I just find too many bubbles to be a distraction. As such, I have now learned that my experience with Italian sparkling wines was with frizzante rather than spumante. This is effectively the difference between lightly sparkling wine and full bore, champagne caliber beverages.

That explains it; and so we learn. Great price, 6 of 10.

Monday, September 20, 2010

The 10 best IT certifications: 2010

I stumbled upon a three week old article at TechRepublic, The 10 best IT certifications: 2010. What a load of crap. The "article" was written by some fool named Erik Eckel, who even states in the article that it's a load of bull:
There’s no double-blind statistically valid data analysis run through a Bayesian probability calculus formula here. I’ve worked in IT long enough, however, and with enough different SMBs, to know what skills we need when the firm I co-own hires engineers and sends technicians onsite to deploy new systems or troubleshoot issues
In other words: I made all this up, based on my personal opinions.

And what qualifies me to question his greatness? After all, he:
...is president of two privately held technology consulting companies. He previously served as executive editor at TechRepublic.
Me? I'm a nobody.

Well... I may be a nobody, but at least I'm smart enough to do basic research before attaching my name to an article. News flash: there is no such certification as the RHCP. It's RHCE, you ass-clown.

Have you guessed that his article talked down the value of the RHCE, yet? He also questioned the worth of VMware and ITIL. What is his logic?
Microsoft owns the market.
In other words, this guy is one of those people who walks in, sells you a bunch of Microsoft crap, then walks away midway through the project, leaving those of us who have actually stayed current with technology to come in and clean up his mess.

Well, TechRepublic just made my proxy server's blacklist.

Thursday, September 16, 2010

Perl Taint Mode Regex

I think I finally have a handle on Perl's taint mode as a result of a couple scripts I've been working with. I stumbled upon taint mode after reading an article that said most web based exploits are the result of programmers (or developers, as the kids like to say) failing to validate input. What taint mode does is cause the script to fail if inputs are not validated. To invoke taint mode, modify the shebang to read:
#!/usr/bin/perl -T
Now inputs must be laundered:
$validate=$form{"code"};
if ($validate =~ /^(\w*)(3|5|7)(\d{3})$/){
$validate=$1.$2.$3; }else{
die "can not validate"; }
The first line reads the input, but the input is untrusted. In the conditional, we compare the variable against a pre-defined, expected format. If the input matches the format, the variable is rebuilt from the captured groups, which Perl now trusts. If the validation fails, the script befalls a brutal and senseless death... which is better than being compromised or exploited.

The trick to this process is understanding how to format the regex and understanding how it is laundered. First, the format pattern is not regex. Sure, all the docs say it is... but it's not. So, here's what you need to know:
the format sits between / /
the ^ and $ are anchors, as regex
the ( ) encloses checks
the checks are numbered
the first check is $1, second is $2, etc
if there is an | in a check, it's an "or"
the * is a wildcard count
but {3} says exactly 3 characters
Let's look at the example above:
^(\w*)(3|5|7)(\d{3})$
Start at the beginning, grab any number of \w characters and assign them to $1. Look for a 3 or a 5 or a 7 and assign it to $2 (second set of parens, thus second check.) The last check ensures that the last three characters are digits. Remember that regex is "greedy", so effectively, this expression is evaluated backwards.

Now that you validated the input against the pattern, reassign the checks ($1.$2.$3) back to the variable. This nukes whatever badness the evil doer tried to impose on you. Do this for every variable you read in, and then destroy the input array, to ensure that no lazy developer slides a new form value into the script without validating.
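Here's a minimal sketch of that discipline, assuming the same %form hash as above (the field names and pattern are made up for illustration):
my %clean;
foreach my $field ("code", "zip") {
    if ($form{$field} =~ /^(\w{1,16})$/) {
        $clean{$field} = $1;   # rebuilt from the capture, so no longer tainted
    } else {
        die "can not validate $field";
    }
}
undef %form;   # nothing un-laundered survives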

To test match patterns from Bash, try this one-liner:
perl -e 'if("test01" =~ /^(\w{4}\d{2})$/ ){ \
  print "+ $1.$2.$3.$4.$5"}else{ \
  print "- $1.$2.$3.$4.$5"}'; echo
Simple, huh? Let's do an e-mail address:
perl -e 'if ("xxx\@yyy.us" =~
  /^(\w{1}[\w\-\.\_]+\w\@)(\w{1}[\w\-\.\_]+\.)(us|com|net)$/) {
  print "+ $1.$2.$3.$4.$5" } else {
  print "- $1.$2.$3.$4.$5" }'; echo
Ouch! (BTW: it was actually Perl's double-quoted string interpolation, not Bash, that made me escape the @ symbol.)

For a breakdown of all the pattern matches, check out Steve Litt's Perls of Wisdom.

Monday, September 06, 2010

Apache Modules for Basic Authentication

I think I've identified the minimum modules for Apache basic authentication:
auth_basic
authn_file
authz_user
authz_default
You'll also need authz_host, but that's probably already in place to support Allow/Deny.
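As a sketch, the matching LoadModule lines and a minimal protected directory would look something like this (the paths, realm name, and htpasswd location are hypothetical):
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_default_module modules/mod_authz_default.so

<Directory "/var/www/html/private">
AuthType Basic
AuthName "Restricted"
AuthUserFile /etc/httpd/conf/htpasswd
Require valid-user
</Directory>
Create the user file with htpasswd -c /etc/httpd/conf/htpasswd doug.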

Thursday, September 02, 2010

Elegant Log Compression

Here's an elegant little one liner that I didn't expect to work. I had a partition pushing 99% space used, the largest culprit being daily log files. They weren't mine, so I couldn't delete them, but I could compress them. But what about next month?

How about this:
cd /some/path/logs
for J in `ls *log?$(date +%Y)-*$(expr $(date +%m) - 1)-*`; do
  ls -lh $J; tar -czf $J.tgz $J; ls -lh $J.tgz; mv $J /dev/shm;
done
The beauty of this is the embedded execution statements.

Within the backticks are a pair of executions, one of which nests an execution.

This particular incarnation compresses last month's logs. It shows the original size and the compressed size, then moves the file to a holding directory. (Caveat: in January the month arithmetic yields 0, so December's logs would need a by-hand run.) On a real machine, I'd probably change the middle line to:
tar -czf $J.tgz $J; rm -f $J
Pop this in a cronjob and run it at "1 2 3 * *".
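In other words, save the loop as a script and the crontab entry looks like this (script name hypothetical):
1 2 3 * * /usr/local/bin/compress-last-month.sh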

Sunday, August 22, 2010

every: command not found

This was an interesting puzzle. I was looking at a hardening document for Linux that identified a huge number of files that needed more restrictive file permissions. Among them were /root/.bashrc, /root/.bash_profile, /root/.bash_logout, and so on. It made sense-- nobody else needs to read them, yet they were 644 instead of 600.

But the doc pointed out that the user "root" might not use the bash shell. What if he was a psychopath and used csh? In that case you'd have to look for a bunch of .c* files. I immediately realized the best thing to do was to wildcard this task:
chmod 600 /root/.*
And then I moved on.

But within a few minutes... something was wrong. In another window, as user "doug", I tried to list a directory.
-bash: ls: command not found
What? I couldn't even list my home directory. As a matter of fact, I couldn't execute any command.

This is where your heart kind of skips a beat. As root I could list anything, including /bin/ls. So as root, I tried to switch to user doug:
su: warning: cannot change directory to /home/doug:
  Permission denied
su: /bin/bash: Permission denied
Oh crap!

Eventually it occurred to me. Consider this situation:
ls -a /root
.   ..   .bashrc   .bash_logout   .bash_profile
When I used the dot-splat wildcard, it must have picked up dot-dot, which would be the root directory. From there it probably reset the permissions on /bin, /etc, and so on. I just needed to reset those perms to 755.

No luck. As a matter of fact, every directory seemed to be correct. I could not see any permission that was wrong. And then... in a stroke of unparalleled genius, I tried something else. I looked at the only set of directory permissions you can never see:
# ls -ld /
drw------- 24 root root 4096 Aug 22 22:51 /
What about:
# chmod 755 /
# ls -ld /
drwxr-xr-x 24 root root 4096 Aug 22 22:51 /
And now everything works.

Whew.
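For the record, the glob that does what I actually meant-- dotfiles, but never dot or dot-dot-- is:
chmod 600 /root/.[!.]* /root/..?*
The first pattern skips . and .., and the second catches the oddball names that start with two dots.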

In the end, however, I do have to admit one thing: The system was significantly hardened. Hard as a brick!

Installing OpenSSH from Source

I've got a cluster of Fedora 9 machines that run Apache web servers. Since they run Apache 2.2, there is no reason to upgrade the OS. Unfortunately, I stumbled on a bug in Fedora 9's implementation of key authentication with OpenSSH. It was apparently fixed in F10 but, as is one of the inherent dangers of Fedora, not patched in F9. Rather than kluge around with patching the RPM, I decided to install OpenSSH from source.

When you visit the OpenSSH website, you want to get the portable source. I downloaded that onto the box and extracted it to /usr/local to create the openssh-5.1p1 sub. Normally the README file explains the compile sequence, but in this case I had to get the instructions from the FAQ.

The first few attempts failed until I installed zlib-devel and openssl-devel. Then it was a simple case of the standard:
./configure
make; make install
This placed the binary in /usr/local/sbin, but messed up the etc structure.

All the config files were in etc and not a sub, so I created /usr/local/etc/openssh and moved all the files into the sub. This required an update to the sshd_config, however. I had to edit the HostKey parameters to include the sub in the path.

To test, we execute:
/usr/local/sbin/sshd -Dd \
  -f /usr/local/etc/openssh/sshd_config
Connect from remote. Test the keys. Bug gone. All good.

Now to symlink everything:
cd /etc/
mv ssh ssh-redhat
ln -s /usr/local/etc/openssh ssh-openssh
ln -s ssh-openssh ssh
ls -ld ssh*
cd /etc/init.d
cp sshd sshd-openssh
mv sshd sshd-redhat
ln -s sshd-openssh sshd
This gives us a SysV startup script that points to the correct config files, but the wrong binaries. We need to change all the /usr/ entries to /usr/local/:
sed -i "s~/usr/~/usr/local/~" sshd-openssh
(There's actually only two lines, and the first shouldn't count.)

Oddly, on first try, it fails. The reason is that RedHat built the SysV script to check for the path of the config, but didn't provide the path. This means it fails and uses the default. Since we moved the config... it fails. The solution, which makes everything portable, is to put the config path where RedHat expects it:
echo 'OPTIONS="-f /etc/ssh/sshd_config" ' > /etc/sysconfig/sshd
Optionally, recompile with --sysconfdir=/etc/ssh so that both binaries point to the same sub.

One downside is that the binary is running unconfined by SELinux. If you're really ambitious:
chcon -t sshd_exec_t /usr/local/sbin/sshd
chcon -u system_u /etc/init.d/sshd*
chcon -t initrc_exec_t /etc/init.d/sshd*
Restart the service to confine.

Saturday, August 21, 2010

Casa Santosola Barbera D'asti


I went shopping for a Barolo, but couldn't find one in my price range. This was in the Piedmont section, so I gave it a try. It was a very good wine, but... it was not sufficiently different from so many other Italian reds. When I looked it up on the chart, I found that the grape, the Barbera, is right next to Sangiovese.

The product was good, the price was good, but this one just did not stand out. 6 of 10.

Kim Crawford Sauvignon Blanc


I've recently seen several ads for Kim Crawford's wines, and saw a few positive reviews, so I went about $5 out of the budget and grabbed this Sauvignon Blanc. A few points: Kim also has a chardonnay, but go with the sauvignon, since the vineyards are in Marlborough, New Zealand. And as we know, if you're doing New Zealand, you're doing screw top.

The verdict? You know how snooty wine reviews talk about "hints of pear"? This one takes the aroma of pear and smacks you upside the head with it. There is no doubt about the pear flavor.

Unfortunately, I'm not real big on fruity wines. Decant this one and let it breathe. (That way no one else sees the screw cap.)

6 of 10

Valley of the Moon Chardonnay


After reading an interesting article about California versus European wines, I decided to give a few a try. The gist of the article was that a 90 point wine is a 90 point wine regardless of its point of origin. This is to say that Californians are judged by the same standards as Europeans. As a result, if a California wine is highly rated, it should meet the same standards as its European counterparts.

I've always found Californians to be too flamboyant. The wines, that is. The people that run the wineries in California are always doing stuff to the wine to make it exciting. They want it to be memorable, but usually end up making it just plain bad. Europeans don't do that-- they let wine be itself, and enjoy it for what it is meant to be.

But this one was a 90 point chardonnay, and it was in my price range. And it was wine. Just wine. Not a bunch of pretentious flavoring to enhance the wine drinking experience. Just a good glass of wine. Not great... which was disappointing for a 90 pointer. If it had been unrated, it would have been a 7.

I'm going to say it's a 6 of 10.

Thursday, August 05, 2010

Bruised, But Better

I went to a specialist at Union Memorial Hospital's Sports Medicine center in Baltimore. (I know I talk down Baltimore, but this place is way better than anything in DC.) They took off the ER dressings and fit me with a Robocop boot, rather than a cast. Here's the bruising after 48 hours.


It was nearly black at the doctor's office, but moving around has helped circulate the blood and get it to a nice purple.

The cover story is that I fell down the stairs. I didn't figure the insurance would pay if they knew how this really happened. And if I admit what really happened, I'd have to explain why an almost 50, out of shape, computer nerd decided to take up street fighting and kickboxing as a hobby.

Monday, August 02, 2010

Breaking Your Foot is No Fun

It seems that breaking the fifth metatarsal of your foot is so common, it has its own name: the Jones Fracture. I don't know who Jones is, but I'm glad to have a Jones Fracture and not a Johnson Fracture!

No drugs... and they made me walk out of the ER. You bastards!

Now if you'll excuse me, I'm off to update my Facebook status and tweet the news.

Yeah, right.

Tuesday, July 27, 2010

Who In The World Isn't On Facebook

All too often CNN lacks real news to report, so they make stuff up. As a classic example, they ran a story entitled Who In The World Isn't On Facebook, the first line of which reads:
Seriously ... at this point, who's not on Facebook?
Seriously: That was the lead line.

Did you hear that? That was the sound of Edward R. Murrow coughing up a lung in disgust at the state of what is now called journalism. (And don't even get me started on Fox!)

The article reports that "Facebook CEO Mark Zuckerberg announced that the site hit a half-billion active users", which is a total lie. Did they not notice his pants on fire? Half a billion active users? NFW. Half a billion accounts, 30% of which haven't logged in in a year, 20% of which are fake profiles used by thieves, and 10% of which are husbands claiming to be single. That leaves maybe 200 million, and that's being generous.

I thought about my friends--
  technical people, hackers, nerds: not on Facebook
  professional contacts: not on Facebook
  the six siblings I acknowledge: only one on Facebook
  mother or father: not on Facebook
  step-mother or step-father: he's on Facebook
    (he friended my brother, never used the account again)
  kids: on, one has four posts since 2009, the other six

So, in the end, maybe 3% to 5% of all the people I know are on Facebook. As for CNN? I think they've just let the 995,554 people that like their page go to their head.

Saturday, July 24, 2010

Thanks for Visiting: Script Kiddy

Everybody knows that hackers fall into two categories: script kiddies and Chinese cyber warriors raised from birth to destroy the American power grid.

Well, I've got this little VM floating around the clouds of the internet. Nothing exciting. It hosts http://dougbunger.com, which is mostly 404 pages and dead links. But... it's my little cloud VM, and I love it.

So all week long, somebody has been slamming my server, trying to hack in. Why? There's nothing of value. Not quite true: chances are, if they were to compromise my server, they would probably use it as a file drop for pirated media or pr0n. (And not the good kind... of either.)

I don't think it's the Chinese: they are too busy hacking Google to read their dissidents' email. No, it's the Script Kiddies. How do I know? They are hitting the server with thousands of PHP and SQL exploits. Unfortunately, the server has neither. So, I implemented an Apache redirect:
AliasMatch ^$ /var/www/html/index.html
RedirectMatch (.*[pP]+[hH]+[pP]+.*) \
    http://english.cpc.people.com.cn
RedirectMatch (.*[sS]+[qQ]+[lL]+.*) \
    http://english.cpc.people.com.cn
I inserted two lines that evaluate the URL and redirect anyone that asks for anything containing PHP or SQL to another website. My regex was not sufficiently righteous, and redirected blank URIs, so the first line ensures you get an index page.

And where does something like http://vypress.bunger.us/sql.php redirect? Why to the Chinese Communist Party home page, of course. Their people are trained for this kind of thing. I'm sure they will appreciate the practice.

Wednesday, July 21, 2010

I'll Take One Electric Sikorsky, To Go

Yes, I do need my own electric helicopter to fly to the grocery store. If it can hold a 30 minute charge, I'm cashing out my IRAs.

(Assuming my IRAs ever get any more valuable than a Happy Meal.)

Sunday, July 11, 2010

Moinet Prosecco

I decided to broaden my horizons on Prosecco by expanding my price range. I spent about $18 on this bottle of Moinet-- pronounced mwaanay. It was more effervescent than less expensive brands, and held its fizz overnight in the refrigerator; but that's the trait of a good sparkling wine.

This would be good event wine, but is a little too bubbly for everyday use. On my scale, it gets a high 7, because of price. If price is no object, an 8 for sure.

Monday, July 05, 2010

Witness to a Moment of Innovation

Not of Earth shattering importance (like shattering the Earth would be a good thing... or even important, since we'd all be eradicated), but something happened on Saturday that could be an interesting trend. Remember back in the 90's when every few days you got an AOL CD in the mail? Remember how they were all completely worthless? Well, Saturday, I got a DVD in the mail.

Again, not interesting, since I get Netflix (et al) DVD's in the mail a couple times a week. This was for a new TNT series called Rizzoli And Isles. As a promotional gimmick, TNT sent the pilot episode on DVD as a preview of the July 12th debut.

Imagine if we started getting DVD's in the mail as often as we use to get AOL CD's. Unfortunately, once the trend catches on, most of the DVD's will be crap... Just like AOL.

Saturday, July 03, 2010

Browser Based SSH via Webshell

Let's say you need to SSH into your server, but you're not at your regular workstation. I've always recommended people carry a USB thumb drive with a toolkit of programs, such as Putty. But what if the machine you have doesn't have a USB port? No problem, you can download Putty. But what if the machine you have is a kiosk terminal that doesn't allow you to download...

Yeah, I'll admit it sounds pretty far-fetched, but I have found an ultra cool package that could provide exactly such an emergency functionality: Webshell 0.9.6. It runs as a local python service and allows login via an AJAX enabled browser.

Behind the scenes, the browser client communicates with the python service, and the python service acts as an SSH client to access the local SSH service. On the surface, this could be a problem, as the browser to python connection would normally be unencrypted. This issue can be mitigated by installing OpenSSL support for python. Unfortunately, the pyOpenSSL package wasn't in my Fedora repo, so I had to grab it from Pbone.

I made a couple tweaks to my install. I changed the port from the default 8022:
sed -i "s/8022/???/g" webshell.py
And since we always change the SSH port of outside servers:
sed -i "s/in +' loc/in +' -p ???? loc/" webshell.py
And added some headspace to the top of the page:
sed -i "s/margin:0;/margin:25px 0px 0px 0px;/" \
  www/webshell.css
And changed the font from 10 to 12:
sed -i "s/font:10/font:12/g" www/webshell.css

Once you change the font size, you'll need to change the default background or remove the JPG for solid black.

The documentation is a little unclear on the fact that the program, by default, only listens on 127.0.0.1, so you have to launch the script with -i 0.0.0.0 to accept outside connections. Of course, you'll have to build your own SysV start script.
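Here's a minimal sketch of such a script, assuming the program was unpacked to /opt/webshell (that path and the log file are my choices, not the package's):
#!/bin/sh
# chkconfig: 345 99 01
# description: Webshell browser-based SSH gateway
case "$1" in
  start)
    cd /opt/webshell && nohup python webshell.py -i 0.0.0.0 \
      >> /var/log/webshell.log 2>&1 &
    ;;
  stop)
    pkill -f webshell.py
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    ;;
esac
Drop it in /etc/init.d, chmod 755 it, and chkconfig --add it.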

A side note: there are websites that run this program as a free service to let you web into their server, then hop over to yours. You probably don't want to use those free services. Sure, it's SSL from you to them, and SSH from them to your server, but what's the protocol that encrypts the link between the SSL and SSH? Can you say none?

Archiving Solaris... Forever!

I found a piece of paper with some Solaris notes. The paper is going into the trash, but the notes are going to the internet to be archived for the good of humanity. Some of these notes may be archived elsewhere on the blog.

To set up your user environment, add to ~/.profile:
export PS1="\w #"
export PAGER=less
export TERM=ansi
alias vi='vi +"set showmode ignorecase" '
export EDITOR=vi
Man... I hope I never have to support Solaris again.

Wednesday, June 23, 2010

Compiling Apache Without Default Modules

I have always liked the fact that RedHat and Fedora's Apache httpd RPM is compiled as a fully modular server. Yeah, you lose a couple performance points, but you have a slim footprint which allows more sessions, and there aren't unneeded subroutines waiting to be exploited. Yet, if you download the Apache source and try to compile, you get 24 components added to your binary. Bloat!

To compile a slim, modular Apache, use:
./configure --enable-mods-shared=all --with-mpm=prefork \
  --disable-deflate
make; make install

/usr/local/apache2/bin/httpd -l
Compiled in modules:
  core.c
  prefork.c
  http_core.c
  mod_so.c
But this raises an interesting question-- what if we actually want to statically compile some, but not all, modules? Maybe we want a dedicated proxy/balancer:
./configure --enable-mods-shared=all --with-mpm=prefork \
  --disable-deflate --enable-proxy=static \
  --enable-proxy-ajp=static --enable-proxy-balancer=static

make; make install
/usr/local/apache2/bin/httpd -l
Compiled in modules:
  core.c
  mod_proxy.c
  mod_proxy_ajp.c
  mod_proxy_balancer.c
  prefork.c
  http_core.c
  mod_so.c
And that's what we are looking for. We'll need to get SSL on this puppy, but it's bed time, so go to sleep.

Friday, June 18, 2010

Off Peak Energy Usage

I may have complained about this before, but since nobody has fixed it yet, I'll complain again. One of my favorite tech news sites was running a story about "smart houses" and commented that energy can cost 10 times more during peak hours. The appliances in the smart house "will be able to automatically delay its actions until off-peak hours."

You know what... I don't care! It doesn't save me any money.

I get charged a flat rate: nights, days, weekends. How does my washing machine deciding to wash my clothes later, because it costs less, help me? It doesn't! Who does it help? The global ecosystem? BRRRRAP! Wrong answer, you naive twit-- the power company's profit margin. That's it, nobody else.

Cost savings are not passed to the consumer, they pad the Wall Street coffers. If the power companies had invested in their infrastructures over the past four decades, we'd already have a smart grid. But noooooooo. They pocketed the profits and have left the consumers to deal with their short-sightedness.

So, when I'm ready to wash my clothes, I'm doing it. If I overload the grid and brown you out, too bad. You should have installed a point of use energy system... or at least a UPS. Like me.

Wednesday, June 16, 2010

Another Tomcat Post :: SSL (Part 2)

Oh no-- Not more Tomcat SSL! Yes. But. In the immortal words of Bullwinkle J. Moose: "This time for sure!"

This entry is a follow up to a post a few days ago quaintly titled Another Tomcat Post :: SSL. Since that post, I have made a momentous discovery regarding Tomcat encryption.

There is an annoying error message written to catalina.out on Tomcat restart that, it turns out, is relevant to why this SSL business has been such a mess.
Jun 16, 2010 8:55:14 PM org.apache.catalina.core.AprLifecycleListener init INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path:
This is telling us that we are not using packages optimized for Tomcat. The message can be cleared by installing the tomcat-native RPM:
yum install -y tomcat-native
On restart, the irritating little message is gone.

One of the things that this package does is to include an updated crypto stack for Tomcat that includes x509. Glory to the mighty Gods of Olympus! Once installed, we modify the SSL stanza in the server.xml file:
maxThreads="150" scheme="https" secure="true"
SSLCertificateFile="conf/custom.crt"
SSLCertificateKeyFile="conf/custom.key"
clientAuth="false" sslProtocol="TLS" />
We've removed the keystore directives and used attributes that point straight at our x509 cert and key. Restart Tomcat.

To test:
echo | openssl s_client -connect localhost:8443 | \
grep subj
If you are not using RPMs, but Apache's Tomcat release, look in CATALINA HOME's bin directory for a tar file.

Sunday, June 06, 2010

XenServer System Alerts From the Future

I got the system alert on my XenServer this weekend. It's telling me that on the 16th there was a set of updates released. Except it's the 6th. So this system alert hasn't happened yet. Or maybe they are going to release the updates on the 16th. Nope, the updates are there.

I'm soooooo confused.

Thursday, June 03, 2010

Another Tomcat Post :: SSL

Yeah, I'm about tired of Tomcat, too. But this is new and improved Tomcat: Now with OpenSSL. And we know how much I like OpenSSL.

Okay, simple stuff first. When you install the mod_ssl RPM, it creates a dummy cert. Let's nuke it and create our own:
cd /etc/pki/tls
mv private/localhost.key private/localhost.key.rpm
mv certs/localhost.crt certs/localhost.crt.rpm
openssl genrsa -out custom.key 2048
openssl req -new -nodes -subj /O=doug \
  -key custom.key -out custom.csr
And the CSR gets sent to the non-existent CA... So, fudge it:
openssl x509 -req -signkey custom.key \
  -in custom.csr -out custom.pem
Distribute:
cd private; mv ../custom.key .
ln -s custom.key private.key; cd ..
cd certs; mv ../custom.{csr,pem} .
ln -s custom.pem custom.crt
ln -s custom.pem localhost.crt
service httpd reload
And, yes, memorize *all* that crap.

Test httpd:
echo | openssl s_client -connect localhost:443 | \
  grep subj

Getting Tomcat to work with SSL reminds me of the chorus from an "Offspring" song, Stuff Is Messed Up.

Tomcat requires our cert be converted from x509 to pkcs12. This is not difficult, but there are two critically important issues with the following command. The assigned name must be 100% unique across all files in the working directory. As such, make sure you do this next section in an empty directory.

The second issue is that you will be prompted for a password. It must be more than six characters, even though it will accept fewer, including NULL. Would it surprise you to hear that ultimately your password is going to be coded on the system in clear text? The default clear text password is "changeit".
openssl pkcs12 -export -name unique \
  -in /etc/pki/tls/certs/custom.crt \
  -inkey /etc/pki/tls/private/custom.key \
  -out custom.p12
Ready for another puzzle? Tomcat needs another component called a keystore. Beware: This command assumes your goal is to compile all the pkcs12 files in the working directory. Wait-- Don't assume I said something that I didn't: the command does not source *.p12, it evaluates all the files in the directory and if a file is a pkcs12 file, it compiles it. That's why we're in a nearly empty directory.

And remember the password we entered a moment ago? The keystore's password must match. Oh... and the Tomcat SSL documentation is wrong.
keytool -genkeypair -keystore custom.jks \
  -alias unique -dname O=doug
Notice that there is no -in and what was a pkcs12 name is now the jks alias.

*** Updated 6/16/2010 ***
I have since learned the steps above do not work as I thought. The correct next step is not genkeypair, but instead:
keytool -importkeystore -v -srcstoretype pkcs12 \
  -srckeystore custom.p12 -destkeystore custom.jks

Almost home... Tell Tomcat where to find the keystore, cert, and key by adding the following to the server.xml, just above the line that contains "sslProtocol":
maxThreads="???" scheme="https" secre="true"
keystore="conf/custom.jks" keystorePass="changeit"
clientAuth="false" sslProtocol="TLS" />
Before you save the file, make sure the stanza you just edited is not commented out by a set of <!-- --> symbols.

Symlink the original key and cert back to Tomcat's conf directory. Restart (reload) Tomcat. Test:
echo | openssl s_client -connect localhost:8443 | \
  grep subj

What a mess, but at least it works.

Wednesday, June 02, 2010

Tomcat Load Balancing via AJP Module

I've been playing with Tomcat in my spare time and have had some fun with the load balancing module, which happens to be implemented with the help of our friend Apache HTTPD. The basic premise is that we point our traffic at an HTTPD instance and it disperses the traffic to a set of worker nodes. This can be done via the standard http protocol or the Tomcat ajp protocol.

I like this config for /etc/httpd/conf.d/proxy_ajp.conf:
<Proxy balancer://cluster-http>
ProxySet lbmethod=bytraffic nofailover=on
ProxySet stickysession=JSESSIONID timeout=15
BalancerMember http://tomcat1:8080 \
    retry=120 loadfactor=1
BalancerMember http://tomcat2:8080 \
    retry=120 loadfactor=1
BalancerMember http://tomcat3:8080 \
    retry=120 loadfactor=1 lbset=1
</Proxy>
ProxyPass /sample balancer://cluster-http/sample
In the ProxySet lines we define basic values, the only interesting one being timeout. This determines how long a node has to respond before it is considered "down". Next we list the nodes, in this case three.

Oddly, the timeout can be specified cluster-wide, but the retry, which specifies how often we check to see if "down" nodes are up, is listed individually. (It is possible to list timeout individually.) The loadfactor determines the weighting for each node.

The fun value is lbset. This one effectively allows the specification of a hot spare. In the above example, all lbset "0" nodes get hit all the time, and "1" gets no traffic. If all "0" nodes go down, "1" gets traffic.

Ready for sexy? Add this:
<Location /balancer-manager>
SetHandler balancer-manager
</Location>
ProxyPass /balancer-manager/ !
Now you have an interactive, web based, management screen:
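One caveat: you probably don't want the whole internet flipping your worker nodes, so the stanza above is better written with an allow list (the address range here is hypothetical):
<Location /balancer-manager>
SetHandler balancer-manager
Order deny,allow
Deny from all
Allow from 10.0.0.0/8
</Location>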

Monday, May 31, 2010

Setting XMMS As Default Player on F13

My home workstation setup is a dual head Vista laptop and a Linux desktop. The desktop has better speakers, so I'm using it as my office jukebox. I've always used XMMS, but ran into a small problem getting it set as the default player on F13. What I wanted was to be able to double click an MP3 and have it start playing. What I got was that the song would queue, but not play. Here's what I had to do to fix it:
# grep Exec /usr/share/applications/xmms.desktop
Exec=xmms -e %F
# sudo vi /usr/share/applications/xmms.desktop
And change the -e to -p. This changes XMMS's behavior from enqueue to play. For some reason, someone decided they wanted double click to add songs to a manually executed playlist-- every other player (including Windows Media Player!) uses drag and drop to add songs to the playlist.

BTW: This is what was originally posted, which ended up applying on a file by file basis rather than globally.

1. Open the Music folder (or any location that has an MP3.)
2. Right click an MP3 file and select Open with Other Application.
3. Find and highlight XMMS.
4. Expand the option to Use a custom command.
5. Add " -p " to the xmms command. (The spaces are important.)
6. Check Remember this application.
7. Click Open.

Thursday, May 27, 2010

OpenSSL: Love At Last

No... Not even close. It is so counter-intuitive, needlessly complicated, and maddeningly confusing. Thus forcing me to cheat.

Determine a website's SSL cert expiration date:
echo "" | openssl s_client -connect mail.google.com:443 \
  2> /dev/null | openssl x509 -noout -text | \
  grep After

Verify a file is a key:
openssl rsa -noout -check -in localhost.xxx

Find a key file that is mislabeled:
for J in `find . -type f`; do echo $J; \
  openssl rsa -noout -text -in $J 2> /dev/null | grep Pri; \
done

Verify a file is a certificate:
openssl x509 -noout -in localhost.xxx -enddate

Find a cert file that is mislabeled:
for J in `find . -type f`; do echo $J; \
  openssl x509 -noout -enddate -in $J 2> /dev/null; \
done

Verify the key matches the cert:
[ `openssl rsa -noout -modulus -in localhost.key` \
  == `openssl x509 -noout -modulus -in localhost.crt` \
] && echo yes || echo no
(Remember that those are back-ticks.)

View a PKCS12 binary file:
openssl pkcs12 -info -nodes -in localhost.p12

Glorious Peoples Shuttle of Greatness in Space

Behold, comrades: the Soviet space shuttle Buran. After its single mission, I'd heard this pinnacle of communist engineering had been retired, but look what I found (with some help) on Google maps. No street view... too sad.




Wednesday, May 26, 2010

Happy Fedora 13 Day

In keeping with my standing policy, I've skipped a version, and jumped from Fedora 11 to 13. I was quite disappointed, however, that Xen Dom0 is not included.

* Disk Druid has changed, allowing for safer isolation of disks that should not be formatted. Unfortunately, I had problems getting LVM to work.
* Once again I loaded KDE, and found it beautiful, then promptly did away with it. I just can't stand Konsole-- I've got to have fast cut and paste.
* Looks like Plymouth for ATI Radeon is working, but I'm back to not being able to get the resolution beyond 1024x768.
* NIS still doesn't work out of the box, but I've got to move to Kerberos anyway.
* And NetworkManager... It just keeps getting worse and worse.
* The Grub kernel line is significantly more complicated, because it seems as if it is being ordered to NOT load modules.

I'll reload again tomorrow and we'll see if there are any new applications.

Monday, May 17, 2010

RedHat Tomcat 6 with Web Manager

The Red Hat Tomcat 6 RPM is not behaving politely. It would seem the manager should be available after install, but I'm getting a blank page. Turns out that I've had life pretty easy thus far (news to me) and someone has already done the hard work. Here's what it took for me to get Tomcat 6 working via RPM.

Starting with my "standard load" which does not include Apache:
# yum install tomcat6 tomcat6-admin-webapps
This will snag a quantity of dependencies, but will install with the web manager broken. Before starting Tomcat, we will need to "fix" the web manager. While we're at it, let's do some reorganizing:
# ls -l /usr/share/tomcat6/ | awk '{print $8" "$9" "$10}'
bin
conf -> /etc/tomcat6
lib -> /usr/share/java/tomcat6
logs -> /var/log/tomcat6
temp -> /var/cache/tomcat6/temp
webapps -> /var/lib/tomcat6/webapps
work -> /var/cache/tomcat6/work
Okay... They tried to organize things, but I've never seen anybody put applications in /usr/share on a production system. Let's go with /opt:
# mkdir /opt; cd /opt; ln -s /usr/share/tomcat6 tomcat
# ln -s tomcat $(cd /usr/share/doc; ls -d tomcat6-*)
# ls -l | awk '{print $8" "$9" "$10}'
tomcat -> /usr/share/tomcat6
tomcat6-6.0.18 -> tomcat
Time to fix the manager. The web manager will ask the user to authenticate, even though no user is allowed by default.
# cd conf; grep manager tomcat-users.xml
One of the lines should show the user "tomcat" with the role of "manager". Notice the line is commented. Obviously we un-comment the line to allow a manager. We should now be ready:
# service tomcat6 restart
Hit the manager at something like:
http://tomcat.example.com:8080/manager/html
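For reference, the un-commented line in tomcat-users.xml ends up looking something like this (pick your own password; "changeme" is just a placeholder):
<user username="tomcat" password="changeme" roles="manager"/>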

Sunday, May 16, 2010

Fedora 10+ Kernel Modesetting (KMS)

Boring backstory: I tend to buy computers in sets, so currently, my three primary R&D machines are HPs: a set of twins, and a more powerful third. That one is now loaded with ESX, so my main Linux desktop is one of the twins. Last week, its hard drive died and I just got the replacement. Now to the real story...

I'd been running Fedora 6 or 8 to work with Xen. That project has been finished for several months, so when I installed the new drive, I loaded 11. I found, however, that I could not get the Gnome desktop to run at better than 1024x768. I had run 11 before without problems using the same monitor at 1600x1200-- but that was on "third", who is now running ESX.

I checked the twin out, and the card should have been able to run 1280x1024. I could get system-config-display to specify 1280, but the desktop would always drop to 1024. After investigating the problem, I found the culprit was KMS, or kernel modesetting. (Yes, it's one word.)

The idea is that the kernel, who owns all the hardware anyway, will decide the best resolution, and the software will do as it is told. Unfortunately, it works with Intel, plays nice with nVidia, but there are a few issues with ATI. Turns out, third was nVidia, and the twins are ATI.

A feature that is closely tied to KMS is the new boot progress screen called "Plymouth". Without KMS, Plymouth is just a three color progress bar. With KMS, it's a blue sun projecting solar flares. For these ATI Radeon machines, no Plymouth. This is because KMS support doesn't reach back to the older ATI cards. As a result, Gnome looked to the kernel for the correct resolution, the kernel said "don't know", and so the desktop could not be made to exceed 1024.

In the end, the solution was to add a Grub argument:
nomodeset
Still no Plymouth, but when Gnome asks the kernel for the correct resolution, the response is "decide yourself". Worked for me. Other possibilities include any one of the following:
vga=795
radeon.modeset=0
radeon.modeset=1
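All of these go on the kernel line in /boot/grub/grub.conf. A sketch, with placeholders where your version and root device go:
kernel /vmlinuz-<version> ro root=<root device> rhgb quiet nomodeset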

Good reading:
Plymouth Graphical Boot
How To Enable Graphical Boot with Plymouth

Monday, May 10, 2010

Sudo Read Only All

I had a friend with an interesting problem: she had to replicate a set of configuration files from one Linux machine to another, but she didn't have root on the old box. Thus, she couldn't read files like /etc/securetty, which was permission 600.

Here's where life gets strange... The customer didn't mind her looking at the box, they just didn't want her changing anything. The best way to make sure she doesn't change anything is to not give her sudo.

Rock --> You <-- Hard place.

Solution: /usr/bin/less is a read only command, so let's just sudo it! Unacceptable, as there is a thirty-year-old hack that lets you bang out of less to a command line, sayeth information security. Easy enough to fix...
echo "username ALL=NOEXEC: NOPASSWD: /usr/bin/less" >> /etc/sudoers
The NOEXEC: prevents the "bang hack" and allows full system visibility.
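With that in place, she can page through anything on the box:
sudo less /etc/securetty
One caution: appending straight into /etc/sudoers is living dangerously, since a syntax error there breaks sudo for everyone. visudo is the safer way to make that edit.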

Thursday, May 06, 2010

Splitting MPEGs On The Command Line

I was cleaning out the basement, and came across a box of old VHS tapes. Needless to say, they went in the go-to-the-dumpster pile, along with a VCR, an old video capture board, and a PIII. Then it occurred to me: hey, that's a video encoding system sitting in the garbage heap.

A couple hours later, everything was assembled, and I transferred my first tape. A problem, though: it was too much effort to get the file to start and end at the right place. I spent some time screwing with some of the worthless video editing software, when I found a couple posts that solved the problem.

And, yes, it's a command line solution. Your GUIs are so overrated.
ffmpeg -i Cap00.mpg -vcodec mpeg2video -r 29.97 -b 2000k -ab 224k \
  -ss 00:00:37 -t 2:06:30 jurassicPark.mpg
(The encoding options have to follow -i; anything placed before the input file is treated as an input option.)
These settings will take an input file encoded at the card's native settings, and chop off everything before 37 seconds and after 2 hours (plus change). I used MPlayer to get the time values.
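For anyone wondering, a sketch of how I scrubbed for those time values (in MPlayer, the o key cycles the on-screen timer, and the arrow keys seek):
mplayer Cap00.mpg
The timestamp where the content starts goes to -ss; end minus start goes to -t, since -t is a duration, not an absolute stop time.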

Sunday, May 02, 2010

Norton AV Products Still Suck

I've always hated Norton products. McAfee is way more efficient. But Comcast decided they wanted to switch the free AV product to Norton. They probably saved a nickel doing it. So, here's what happens when you use Norton (other than your machine running slow...)

I got this pop-up: Bad news. I ran a full scan. Nothing. The message returned. Reboot, update, disconnect from the network, scan, clean. The message returned.

For lack of any other option, I clicked "Get Help". Eventually, I was thrown into a chat session with an "analyst". After some discussion, he determined I needed to upgrade my software from 3.x to 4.x, which seemed strange. Is he telling me that v3 is known to report bogus infections? He never said "yes, v3 has a bug," but he did say there was no virus, and the upgrade would stop the messages.

Suck.

Saturday, May 01, 2010

Google Earth Browser Plugin

The Google Earth browser plugin is an interesting extension of Google Maps. You can now find a location on the map, click over to a satellite view, and then extend it into a 3-D model. For downtown Washington, DC, it's similar to being in a massive multi-user domain (MMUD).
That would be a first person shooter, for the children in the audience... under 30.
The cool part is that you can see the automobiles on display inside the Verizon Center and the Wizards on the outside jumbo-tron. The detail is so good, you can read the hours on the door of the Chipotle restaurant.

What is strange is how much territory does not exist. Go one block north to Chinatown, and the arch is not there. There are entire blocks that are missing. So this got me thinking: What determines what shows up?

A few hints. The Verizon Center always has a giant movie poster. In the MMUD, it's for "The Heartbreak Kid", which came out in October of 2007. In Street View, it's G.I. Joe, which came out in August of 2009. This implies that Google Earth does not depend on Street View.

I'm afraid the system may depend on crowd sourcing. It is up to the community to model the buildings. This poses two problems. First: what if someone chooses to model the buildings wrong? Second: if they are expecting me to model my own house, it isn't going to happen.

Not only would it end up looking like an M.C. Escher print, but I've just got too many other things to do. Nothing important, mind you. It's not like I've got a life or anything.

Sunday, April 18, 2010

History of the US Moon Base

As you know, we are approaching the twentieth anniversary of man's permanent habitation of the moon. Yes, ever since President Reagan insisted America implement a moon colony by 1990, astronauts have lived at Moon Base. This NASA site chronicles the designs for a permanent moon habitat over the decades. Unfortunately, you can't zoom in on the pictures.

This is apparently part of a series designed for High School students to teach the history of space exploration. The top level page includes a history of the shuttle and discussion of a Mars mission.

Thursday, April 15, 2010

Password Change Policies Do Not Enhance Security

In another example of security through "because we say so," there is a recent study that indicates changing passwords does not enhance security. The premise of the argument is that if the bad guys compromise an account, they will exploit it immediately, rather than hang on to a password for some future use.

Saturday, April 10, 2010

Started Japanese, Ended American

There was a Japanese Street Festival in downtown, but it was waaaaay too crowded. So, I went to the National Portrait Gallery instead... Which is in Chinatown... Which has sushi bars... So I got better Japanese, for less, without the long lines.

Win.

Wednesday, April 07, 2010

Google Earth Vehicle Shoots Self... Sort Of

Ultra-geek time. I was on Google Maps Street View looking for the hot dog guy on the roof of Polock Johnny's in Baltimore. The sun was at just the right angle, and you can see the shadow of the Google Earth vehicle. Notice the periscope. Cool.



Monday, April 05, 2010

Cherry Blossom Fireworks Pictures

Not really, but pictures from Saturday evening's fireworks excursion.  Ah... no fireworks pictures. (I don't know how these got lost: they were supposed to post Saturday night.)


Sunday, April 04, 2010

Cherry Blossom Fireworks Fail

I got lost on the way to watch the DC Cherry Blossom Festival Fireworks last night. I'm not sure it's completely my fault... So did 10,000 other people.

The paper said the fireworks would be part of the music festival going on at the Southwest Water Front (A) at 7th and Maine. The crowd assumed it would be at the Tidal Basin, and collected on its shores (B).


The problem with the waterfront is that it is highly developed, so there are few places to kick back away from the crowd. I decided I would watch from Potomac Park (C) and have a picnic.

To get to the park, I had to foot it in through the crowd from the Smithsonian Metro. I made a wrong turn, and ended up "on the wrong island" (D). The good news was no crowd. The bad news was trees and bridges obstructing the view.

I tried my Sprint PDA's navigation system-- it said I was at the Pentagon Lagoon.

Better luck next year.

Saturday, April 03, 2010

OpenNebula- Red Hat Xen Node

For the record, if you are running Red Hat, consider making the move to KVM-- that is where they will be focusing their attention. This is not to say that Xen is dead, since it is re-included in Fedora 12. Having said that, my test cluster is old hardware that won't support KVM, so I'm having to do my OpenNebula development on Xen.

To test Red Hat functionality, I needed to build a Xen node. I could have used CrapOS... I'm sorry, that was a typo... I meant to say: I could have used CentOS for this test, but we all know how stupid that would be.

What follows is a little black magic used only for testing. These are the steps needed to take an @base install on 5.5 and get Xen running. Since yum sometimes gets confused on this process, it is best done directly off the CD or a mounted image:
rpm -Uvh Server/kernel-xen-[0-9]*.rpm
rpm -Uvh Server/bridge-utils-[0-9]*.rpm
rpm -Uvh Server/xen-libs-[0-9]*.rpm
rpm -Uvh --nodeps Server/SDL-[0-9]*.rpm
rpm -Uvh VT/libvirt-[0-9]*.rpm
rpm -Uvh VT/libvirt-python-[0-9]*.rpm
rpm -Uvh VT/python-virtinst-[0-9]*.rpm
rpm -Uvh VT/xen-[0-9]*.rpm
sed -i "s/default=1/default=0/" /boot/grub/grub.conf
reboot
Note: the nodeps argument is to avoid complaints about sound drivers. Since we are building a cloud node, we don't care 'bout no stinkin' sound drivers.
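After the reboot, it's worth a quick sanity check that the box actually came up on the Xen kernel (a sketch; output varies by machine):
uname -r
xm list
The first should report a kernel name ending in "xen"; the second should show Domain-0 running.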

Now, the dependencies for OpenNebula:
yum install -y ruby xmlrpc
Unfortunately, there is one dependency missing from the Red Hat distro, so we have to grab it from Fedora. Due to some glibc versioning issues, we go back to F8:
wget http://archive.fedoraproject.org/pub/archive/fedora/linux/updates/8/i386.newkey/xmlrpc-c-1.06.31-2.fc8.i386.rpm
rpm -Uvh --nodeps xmlrpc-c-1.06.31-2.fc8.i386.rpm
And let's try OpenNebula:
rpm -Uvh one-1.4.0-1.i386.rpm
Preparing...   ########## [100%]
  1:one           ########## [100%]

Wednesday, March 31, 2010

Fire Breathing Rube Goldberg Machine


They say this is part of a bigger video that runs 30 minutes.

Tuesday, March 30, 2010

OpenNebula Cluster

My OpenNebula cluster is up and running, and all seems to be stable. There were a few fits along the way, but most were probably due to the crap hardware I'm running this thing on. Understand, of course, that this is not a production cluster, so the fact that it runs on four Pentium III's can be forgiven.

The thing to understand about OpenNebula is that it is a cloud environment, not a virtualization platform. This means that we need to choose an OS first. Because of the hardware, I'm using Fedora 8 with Xen. (I prefer Fedora Core 6, since it is more like RHEL 5.2, but it was not stable with OpenNebula. Lesson learned: use F8.)

I did a base kickstart on the nodes, to ensure a slim footprint. I loaded the Xen kernel and libraries, but left off virt-manager to avoid the overhead of an X server. In my cluster, three of the four nodes are identical, but the fourth is more powerful. That will be our head node.

When I tried to load the one-1.4.0 RPM on the head node, I ran into dependencies. (Ah, yes: the download says it's for F11, and I'm using F8.) Needed packages:
yum install -y xmlrpc xmlrpc-c
yum install -y ruby
This extracted to the /srv/cloud/one directory.

The RPM created accounts:
/etc/passwd: oneadmin:x:512:903
/etc/group: cloud:x:903
The user's home directory is set to the containerized directory created above. A little simple sysad magic cleans up the account and assigns keys to allow oneadmin to SSH to localhost without a password.
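For the record, that magic is nothing exotic. A minimal sketch, assuming oneadmin's home is the /srv/cloud/one directory from above:
su - oneadmin
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh; chmod 600 ~/.ssh/authorized_keys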

And here is where it gets beautiful: We now use NFS to export the directory to the nodes. Log in to the nodes, mount the NFS share, and replicate the user accounts. (Note to self: add user and group to NIS.) With the mount in place, return to the head node, become the oneadmin user, and SSH to the node. Since the user's home is the share, and keys are in the share, we get right in.
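A sketch of the export and mount, assuming the head node is named "head" and the nodes sit on a 192.168.1.0/24 private network (adjust to your own layout). On the head node:
echo "/srv/cloud/one 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports
service nfs restart
And on each node:
mkdir -p /srv/cloud/one
mount head:/srv/cloud/one /srv/cloud/one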

The infrastructure is in place, now it's on to VMs.

Sunday, March 28, 2010

SVN Ignore

Okay... Spring's here, no more snow, I've gotten out of the house two weekends in a row. Time to get back to work.

I keep forgetting how to tell SVN to ignore a directory. This is a big deal for me, since I have a habit of creating working directories outside of my server's web root, then moving the files once they are validated. It's a security thing, but it means my dev servers always have a lot of dead files lying around. Here's how to ignore the working directories.

First, ensure your EDITOR environmental variable is set:
export EDITOR=vi
(Add this to your ~/.bash_profile if needed.)

From SVN root, execute:
svn propedit svn:ignore ./target/dir
This should toss you into the editor. Add the appropriate bash style wildcards representing file names and types. For me, a simple * (asterisk) usually suffices. Save the file, and run svn status to verify the results.
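If you'd rather skip the editor round trip, propset does the same job in one shot (same example directory as above):
svn propset svn:ignore '*' ./target/dir
svn status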

Thursday, March 25, 2010

Napoleon

The National Gallery of Art's highlight exhibit is French works. I often wonder how much of our mass media mindset is influenced by these works. Maybe Napoleon only stuck his hand in his vest just this once. This portrait is almost life sized!

There were other French paintings by Matisse, Monet, and Toulouse-Lautrec. In the same collection, a couple by Picasso and Van Gogh.

Sunday, March 21, 2010

Spiceworks IT Dashboard

I spent a few hours playing with Spiceworks, a product that provides a web based infrastructure management and monitoring platform. It's an interesting and versatile product. Most unique is the fact that it is offered at no cost: it's free as in beer, but not free as in speech.

Spiceworks installs as a service on a Windows system on the network. (I used a VM, of course.) It launches a network discovery process, classifies devices, and presents its information through a web interface. Since it is web based, the service can be accessed and managed from any workstation on the network, including Linux.

So, how well does it work? Oddly, my biggest problem was its failure to properly identify Windows systems. Once discovered, Spiceworks prompts for a username and password to be used to access the system. This is kind of scary, since the product is not open source. Unfortunately, most "home" Windows systems do not support remote access. To further complicate things, my Windows systems that do support RDP were shown as "no ports open".

On the Linux side, it was actually quite impressive. It provides graphical versions of df -h as well as a trend chart of disk usage over time. There are also IP and ethernet reports. For the ESX server, it provided a report on running VMs. As for Citrix Xenserver and Linux Xen... nothing.

I can see that the product could have value for a small enterprise: 100-200 people, with an IT staff of 5-15. Anything larger and it would be blocked by firewalls. Anything smaller (like me) and it doesn't really add value. Most importantly, the product is "advertiser supported". There needs to be an ad free subscription version.

Saturday, March 20, 2010

Health Care Protest

Health care protest today. I guess the good news is how few Kool-Aid'ers exist (though this is probably not even .01%). But the thing is: this protest was pretty wimpy, as they go.

Personally, I think the topic is a distraction at a time when we need jobs. These idiots are too deluded to understand. They were too busy chanting about socialized medicine and government financed mass abortions to be bothered doing something worthwhile.

Oh well, at least they stimulated downtown's weekend economy.

Golden Gun, Circa 1650

You've got to be severely pissed at somebody to shoot them with a gold-plated flintlock pistol. Can you imagine parading through the piazza in tights, a velvet tunic, and this baby dangling from your hip? Dude, you be pimpin' big time!

(Museum of the American Indian, Washington, DC)

Thursday, March 18, 2010

More snow at RFK?


The snow piles at RFK are getting bigger!

Actually, they are moving the snow from the other parking lots and have consolidated it into Lot 8. In some parts of the lot, the piles are 15 to 20 feet... within five feet of the tops of the light posts. Sorry for the slant and blur in the picture: it was snapped from a moving vehicle.

Thursday, March 11, 2010

That's Where All The Snow Went

Remember that big snow we had a month ago? When DC plowed the streets, they trucked the snow to RFK stadium. These piles of snow (yes, they look like dirt) have melted down to about eight feet tall. And this is parking lot #8... The others are full, too!