Sunday, December 30, 2012

Defeating Facial Recognition

I saw a web advertisement from Merrill Edge Investments (not to be confused with Merrill Lynch... a risk latent, greedy, delusional, Wall Street investment firm that helped cheat millions of people out of their retirement earnings) for a new investment tool called Face Retirement.  The concept of the ad campaign is based on the work of Daniel Goldstein, PhD, who says people fail to invest for retirement because they "can't see themselves as old."  His research indicated that if you showed someone a picture of what they would look like at age 65, it would motivate them to spend the next 15, 25, 35 years preparing for that age.

I didn't believe it when I heard him say it the first time, but I salute him for managing to bilk the Wall Street suits out of the money to implement this as a web app.  I'll tell you why it's a stupid idea in a few minutes, but first the fun stuff.

Merrill Edge implemented a facial recognition system with an aging feature.  You are supposed to use your web cam to center your face in an oval and snap a picture of yourself.  I decided to use this to demonstrate why facial recognition is not implemented in the wild.  You see, the basic concept that most people fail to understand about facial recognition is the question: What is a face?  Consider our first example.
As is obvious, this is a face-- it just happens to be the face of a dog (and not even a real dog, at that.)  The software successfully denied access, on the grounds that the face was not properly formatted.  In the second example, the application was presented with a "more" human face.

Again the software successfully denied access.  It did specifically recommend I remove my hat. 

Option number three was also a failure, though this is a properly formatted, obviously human face.  Unfortunately, it is not a "real" face, but a picture from an AllState Insurance brochure.  If I owned stock in a facial recognition company, I might stop here, trumpeting how well the product had discarded this obvious attempt at trickery.  But I don't own any stock.  I can't remember why not... Oh, yes, thieving Wall Street scum bags.

But I digress.  Attempt number four:

Success!  I was granted access, using a photo of a face.  Why did face three fail, but face four succeed?  Notice that face number three is not looking into the camera, but face four is a full frontal view.  This allowed the software to properly align the eyes, nose, and mouth.  The senorita from the cover of the "Instant Immersion Spanish" box does not have ears, but the male model does not have eyes (they are closed.)

The fun thing about facial recognition is that most systems can be tricked by a photograph, and the few that cannot are usually tricked by a mask.  In all production quality systems, additional safeguards (heat sensors, echolocation) have to be implemented to override these simple hacks.

As for why this idea of showing someone an aged image of themselves is stupid...  It's a short run fix.  Goldstein researches a favorite subject of mine, decision theory.  He indicates that we postpone long range decisions because we do not see them as relevant.  By demonstrating aging, he hopes to bring a sense of reality to the abstract concept of time.  This works only until the car needs new tires or the muffler replaced.  No one is going to sit on the side of the road, replacing a bald, dry-rotted, flat tire, and say to themselves:
I've got one spare tire.  If this tire blew, the others are bound to go at any minute.  I can spend $500 on four new tires, or I can put the $500 toward retirement.  Would I rather have a safe car that I can drive to work, or make life better for "future me"?
No.  The decision is simple: Replace the tires, screw "future me".  Short run always wins over long run, because "In the long run, we are all dead."  Which raises the question: why would Merrill Edge spend the money on such a tool?  Are they altruistically trying to change human nature?

No.  Like I said... It's a short run fix.  The purpose of the tool is to "help you make a long run decision", knowing full well that short run realities will overcome the tool's effectiveness.  But by then, they've got what long run money you had available at that short run moment.

And isn't that what really matters?  That... and using stuffed dog puppets to validate new technology.

Saturday, December 29, 2012

No Reserved Words in XML

Or so goes the mantra, but if that's true then why can't I use this syntax:
<?xml version="1.0"?>
<parse>
    <rule id="1">
      <type>pattern</type>
      <description>might be time value</description>
    </rule>
</parse>
It passes all my tests for well formed XML.  Why can't I use it?

Well, obviously, I can... so here's the story.  I've got some Perl code that I've used dozens of times before, but all of a sudden didn't work with this data.  After scouring the code and the internet for a reason this data structure wouldn't work, I finally changed the "id" tags to read "item", and everything worked fine.  This would seem to be a happy ending except for one small detail:  It's not my data.

To better understand what's going on, let me explain "doesn't work."  There is a Perl module called XML::Simple.  It combines dozens of steps into three lines:
use XML::Simple;
my $xml = new XML::Simple;
my $data = $xml->XMLin("file.xml");
These lines open the file, read the lines, parse out the tags, and assign the values into a dynamically allocated hash.  By adding a call to Data::Dumper, we can look at the hash structure of the XML data.
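The Dumper call is only two more lines (Data::Dumper ships with core Perl, so there is nothing extra to install):
use Data::Dumper;
print Dumper($data);
The dump should look like this: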
$VAR1 = {
  'rule' => [
    {
      'id' => '1',
      'type' => 'pattern',
      'description' => 'might be time value',
    },
Or that's how it should break out.  Instead, it breaks out like this:
$VAR1 = {
  'rule' => {
       '1' => {
           'type' => 'pattern',
           'description' => 'might be time value',
        },
Which broke my normal subroutines.  Yet, if I change "id" to "item", everything works as expected.  So if it's true that there are no reserved words in XML, why doesn't it work?

It turns out that the Perl XML::Simple module has decided that there are, effectively, reserved words in XML.  Those words are name, key, and id.  If those words are found as attributes in an XML structure, they are promoted to hash keys-- the documentation calls this "array folding," and it is controlled by the KeyAttr option, which defaults to exactly those three words.  CPAN also provides the solution:
my $data = $xml->XMLin("file.xml",KeyAttr=>[]);
By setting the KeyAttr option to an empty list, the folding is disabled and the parser behaves as it should.

Monday, December 17, 2012

dracut: FATAL: initial SELinux policy load failed

Here's an obnoxious install failure: Using the RHEL/Scientific Linux 6.3 DVD, it is possible for an install to crash on first boot with an SELinux error.  The problem is a bug in the %post scriptlet of the selinux-policy-targeted RPM.  The bug is immediately fixed by running yum update... assuming you can figure out how to get the machine booted.  Luckily, the error is nice enough to tell you how to move forward.


Reboot, and at the GRUB menu, append "selinux=0" to the kernel line.  (This disables SELinux entirely for the boot; "enforcing=0" is the option that boots Permissive.)  From the root prompt:
ls /etc/selinux/targeted/policy/policy.24
ls: cannot access /etc/selinux/targeted/policy/policy.24:
No such file or directory
If possible, issue: yum update selinux-policy-*

If the machine is not network connected, the problem can be resolved by restoring the file policy.24 from install media.  And if it were that simple, you wouldn't need me.  You will have to force install two RPMs:
rpm -ivh --force selinux-policy-3.7.19-*
rpm -ivh --force selinux-policy-targeted-3.7.19-*
The second force install will take several minutes to complete.
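To get at those RPMs, mounting the install DVD is enough (a minimal sketch, assuming the stock RHEL 6 media layout with its Packages/ directory):
mount /dev/cdrom /media
cd /media/Packages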

Regardless of how the issue is resolved, it is best to relabel the filesystem:
touch /.autorelabel


Thursday, November 22, 2012

RHEL Cluster Anti-Affinity Configuration

I'm often amused by how vendors define "High Availability", aka HA.  Customers always talk about Five Nines, but "off the shelf" HA solutions seldom achieve 99% availability.  In the case of RedHat's HA cluster service, the default configuration might provide unattended failover within 60 seconds.  Given that Five Nines only allows 25.9 seconds of downtime per month, a single failure can blow away a service level agreement.

To counter the real-world lag of application failover, a system must be load balanced across a cluster.  A real HA environment would have at least three nodes, running two instances of the service.  If one node fails, the load balancer will redirect everything to the second instance, while the service is recovered on the third node.

There is an HA problem that VMware has addressed in their HA solution, but RedHat has not: the anti-affinity rule.  Affinity is when two "processes" favor the same resource.  An example would be when running a web and database instance on the same machine improves performance.  In the case of redundant services, running them on the same machine is pointless: if the machine fails, both instances die with it.  To prevent this, we need an anti-affinity rule that requires the two processes to never be on the same machine.

RedHat cluster suite provides affinity in the form of child services.  If the cluster moves the web service to another node, the database has to follow.  What they don't provide is an anti-affinity rule to prevent the load balanced services from trying to run on a single node.  As a matter of fact, by default, all services will start on the same cluster node.  (It will be the node with the lowest number.)

I found I could implement anti-affinity from within the service's init.d script.  First, we add an /etc/sysconfig file for the process, with the following variables:
CLUST_ENABLED="true"
CLUST_MYSERVICE="service:bark"
CLUST_COLLISION="service:meow service:moo"
A collision is when the presence of a service prevents this service from starting on this node.  The names should be listed exactly as they appear in clustat.  Make sure the script sources the config file:
# source sysconfig file
[ -f /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog
Next, add a new subroutine to the existing init.d script:
cluster(){
  # look for other services on this host
  K=$(for J in $CLUST_COLLISION; do \
          clustat | grep "$J.*$HOSTNAME.*started" \
          >/dev/null; \
          [ $? == 0 ] && echo "$J "; \
          done)
  if [ -n "$K" ]; then  # quoted test: K may hold several service names
    # show service names
    echo -n "Cluster, collision $prog: $K"
    # fail, but with a success return code
    failure; echo; exit 0
  fi
  # look for this service running on other nodes
  K=$(clustat | grep "$CLUST_MYSERVICE.*started" | \
          awk '{print $2}')
  if [ -n "$K" ]; then
    # show hostname of other instance
    echo -n "Cluster, $prog exists: `echo $K | cut -d. -f1`"
    # fail but with a success return code
    failure; echo; exit 0
  fi
}
Finally, add a reference to the cluster sub in the start sub:
start(){
  if [ $(ps -C cluster-$prog.sh | grep -c $prog) == 0 ]; then
    # only check cluster status if enabled
    [ "$CLUST_ENABLED" == "true" ] && cluster
    echo -n "Starting $prog"
Here's what happens in the case of a collision:
  • rgmanager issues a start
  • the cluster sub recognizes the collision, but tells rgmanager that it started successfully (exit 0)
  • rgmanager shows the service as running
  • 30 seconds pass
  • rgmanager issues a status against the service, which fails, since the init.d script lied about the service running
  • the cluster orders a relocation of the service
  • rgmanager issues a start... on a different node
  • there is no collision this time, so the init.d runs as expected

Thursday, November 08, 2012

RHEL 6 Clustering, VM Fencing

I recently retasked one of my lab machines as a RedHat virtualization server, which RedHat calls RHEV, but is really KVM.  One of this machine's tasks is to support a test cluster of VMs.  Under normal circumstances, clustering would require a remote management interface such as an ILO, DRAC, or RMM.

As usual, I was disappointed with how difficult this was.  To make matters more difficult for you, I won't be covering clustering in this article.  This document's scope is limited to setting up RHEV VM fencing.

On the host machine, we need to install the fence daemon.  Considering this is very lightweight, I'm going to do a shotgun install:
yum install fence-virtd-*
On my machine, this loaded four packages: the daemon, the interface between the daemon and the hypervisor, and two "plugins".  (The serial plugin is probably not needed.)

The base RPM will provide the /etc/fence_virt.conf file.  Modify it to look like this:
listeners {
  multicast {
    family = "ipv4";
    interface = "virbr0";
    address = "225.0.0.12";
    port = "1229";
    key_file = "/etc/cluster/fence_xvm.key";
  }
}
fence_virtd {
  module_path = "/usr/lib64/fence-virt";
  backend = "libvirt";
  listener = "multicast";
}
backends {
  libvirt {
    uri = "qemu:///system";
  }
}
Two things to notice about the config file.  The key_file option is little more than a password in a text file, which is going to have to be duplicated on all the VMs in the cluster.  The "theory" is that only a device with the password will be able to fence other nodes.  This brings us to the second point, the multicast option.  If a cluster node issues a fence command, the symmetric authentication key will be multicast on the network in the clear.  Thus, the reality is that the key_file provides no real security.

Which brings us to a second issue with the multicast.  Per RedHat, cross host fencing is not supported.  As such, all cluster nodes have to exist on the same physical machine, rendering real world VM clustering pretty much worthless.  Here's the reality of cross host fencing: It is not supported because of the security concerns of multicasting the clear text fencing password and the fact that RedHat cannot guarantee the multicast configuration of the switch infrastructure.  Given properly configured switches, a dedicated host NIC and virtual bridge in each host, cross host fencing works.  In this lab configuration, however, it is not a concern.
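Creating the key_file itself is trivial; any chunk of random bytes will do (a minimal sketch):
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=512 count=1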

After creating a key_file, open the fenced port in IPtables:
-A INPUT -s 192.168.122.0/24 -m tcp -p tcp --dport 1229 -j ACCEPT
Copy the key_file to each clustered VM (they don't need the config file) and add the opposite IPtables rule:
-A OUTPUT -d 225.0.0.12 -m tcp -p tcp --dport 1229 -j ACCEPT
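With the rules in place, enable and start the daemon on the host (standard RHEL 6 service handling):
chkconfig fence_virtd on
service fence_virtd start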
Running netstat on the host should show it listening on 1229.  What it is listening "for" is the name of a VM to "destroy" (power off.)  This means the names of the cluster nodes and VMs recognized by KVM/QEMU have to match.  On the host, display the status of the VMs using:
watch virsh list
Given a two node cluster, on node1 issue:
fence_node node2
On the host, the status of node2 should change from running to inactive, and a moment later, back to running.  For testing purposes, the fence_node command can be installed on the host, without the host being part of the cluster.  If you try this using yum, you'll get the entire cluster suite.  Instead, force these three RPMs:
rpm -ivh clusterlib-*.x86_64.rpm  --nodeps
rpm -ivh corosynclib-*.x86_64.rpm  --nodeps
rpm -ivh cman-*.x86_64.rpm  --nodeps

Truthfully, the better choice is to build a VM to manage the cluster using Luci.

Sunday, November 04, 2012

Kickstart from Hard Drive ISO

I'm building a machine that may need to be remotely re-imaged, without the benefit of a kickstart server.  I've always heard that you can kick a machine from itself, but had never tried it.  Truthfully, it's probably more trouble than it's worth.  The best option would be to install a DVD drive with media, but configure the BIOS such that the DVD is after the main drive.  Since I didn't want an optical drive on this machine, here's how to kickstart a machine from a hard drive.

Let's do this backwards.  Get the machine to image off an HTTP server and then change the ks.cfg:
#url --url http://1.2.3.4/rhel6
harddrive --partition=/dev/sdb1 --dir=/
Notice that I'm telling the machine that its image is on sdb, not sda.  Providing two drives is safer than trying to image off the boot/root drive, but it could be a partition on the same drive.  Besides, I've got dozens of little drives lying around doing nothing.  Further down in the file, I also indicated:
#clearpart --all
clearpart --drives=sda

Next, we mount the second drive and copy the ISO into the root of the drive.  To clarify:
mount /dev/sdb1 /mnt
cp rhel-6-x86_64-dvd.iso /mnt/
When kickstarting from a local drive, we use the ISO file itself... not an extracted or loop mounted filesystem.  Looking back at the first change we made to the ks.cfg, we indicated --dir=/, so we are telling the installer that the ISO is on the top level of the drive.

As a matter of convenience, mount the ISO, because we need a few files from it:
mkdir /mnt/rhel6
mount rhel-6-x86_64-dvd.iso /mnt/rhel6 -o loop
Copy three files:
mkdir /mnt/images
cp /mnt/rhel6/images/install.img /mnt/images
cp /mnt/rhel6/isolinux/vmlinuz /mnt/images
cp /mnt/rhel6/isolinux/initrd.img /mnt/images
If you were going to allow the machine to rebuild different versions (or distros), you would want to add a version number to each file.
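For example (the version suffixes are mine; the Grub stanza below would then reference the matching names):
cp /mnt/rhel6/isolinux/vmlinuz /mnt/images/vmlinuz-6.3
cp /mnt/rhel6/isolinux/initrd.img /mnt/images/initrd-6.3.img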

To initiate the rebuild, we will use the tried and true Grub rebuild hack:
cd /boot
cp /mnt/images/vmlinuz /boot
cp /mnt/images/initrd.img /boot
cat >> /boot/grub/grub.conf << EOF
title Rebuild
  root (hd0,0)
  kernel /vmlinuz ramdisk_size=8192 ks=hd:sdb1:/ks/ks.cfg
  initrd /initrd.img
EOF
Many docs indicate that the second drive has to be made bootable.  In this case, we are still booting off the primary drive, but only long enough to read the ks.cfg from sdb and switch to install.img.  Once install.img has control, it will read the clearpart command, wipe sda, and reinstall from sdb.

There is a "gotcha" I've not quite worked out yet.  As we all know, Linux is notorious for renaming drive letters at boot time.  It is possible that  the machine might confuse sda and sdb.  This could be disastrous if the machine crashed, and while trying to rebuild, it wiped the image drive!  The good news is that the installer reports that it can't find the image and kickstart file, and fails.  Just reboot until it finds the correct drive.

* It would seem that either a UUID or LABEL could be used in both Grub and the kickstart file.  I'll add checking those possibilities to my ToDo list.  Or you could figure that part out and let me know which works.  It's only fair: I've already done the hard part for you.
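If you beat me to it, here's an untested sketch of the LABEL approach (the label name is mine, and I'm assuming anaconda accepts LABEL= in the ks=hd: syntax):
e2label /dev/sdb1 KSDRIVE
Then, in the Grub stanza:
  kernel /vmlinuz ramdisk_size=8192 ks=hd:LABEL=KSDRIVE:/ks/ks.cfg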

Saturday, November 03, 2012

Citrix Xenserver: Apply Multiple Updates

As a result of reorganizing the servers in my lab, I had to reinstall Citrix Xenserver.  I should have downloaded 6.1, but decided to keep it at 6.0 and apply the updates that I already had on the NAS.  All went well with the install, I moved all my VMs and templates to this machine, and retasked the other machine.

When I went to load the updates, a funny thing happened... It refused to load more than one, and expected to reboot after each.  After a moment of thought, I realized that I had probably never tried to load two at a time before.  It seems like something that should be simple, but the procedure is not obvious.

Here's how:
  1. Highlight a server, click the "General" tab, and expand the "Updates" pane.
  2. In the "Updates" pane, notice which updates have already been applied.
  3. On the menu bar, click "Tools", "Install Software Update", and "Next".
  4. Click the "Add" button, select the lowest numbered update, and click "Open".
  5. At this point, it's tempting to add another update, but don't... yet.
  6. Click "Next", select one or more servers, and click "Next".
  7. A check will be run against the update.
If the check succeeds, click "Previous", "Previous", and repeat from step 4.

If the check fails, then do two things.  First, click "Cancel" and start the entire procedure over again, but don't add the update that failed the test.  Second, don't blame me-- I didn't create the interface.

Once you've added all the relevant updates, click "Next".  You'll have the choice of performing post install steps automatically or manually.  What this really means is reboot now or later.  If you select manually (reboot later), it is possible that some of the updates will fail, but that's actually okay.  When an update succeeds, it appears in the "Updates" pane as Applied.  If it fails, it appears as Not applied.

To activate the not-applied updates, repeat steps 1, 2, and 3, but at step 4, highlight the not-applied update instead.  Continue through the rest of the steps, making sure to choose automatic, as recommended.

Tuesday, October 23, 2012

Linux KVM Disk Drivers

I was having a problem with storage device names on virtual machines running on a RedHat KVM host.  Occasionally, I'd build a VM and the storage device would be named /dev/sda and other times /dev/vda.  I quickly found that if I created the VM using virt-install, I got the sda device, and if I used virt-manager (the GUI app), I got the vda device. After some investigation, I've discovered where things went wrong.

First, the syntax of the virt-install command changed in RHEL 6, and I was still using the RHEL 5 command.  Rather than complaining, RHEL 6 would guess what it thought I meant.  Here's the wrong command:
virt-install -n server -r 512 -w bridge=virbr0 \
-f /var/lib/libvirt/images/server.img -s 10
The -f/-s options say to create an image file that is 10GB.
Here's what was implemented:
virt-install -n server -r 512 -w bridge=virbr0 \
--disk path=/var/lib/libvirt/images/server.img,\
bus=ide,size=10
Rather than complaining that the -f/-s options were deprecated, it invoked the new syntax and assumed I wanted an IDE drive, which on RHEL 6, is named as if it were a SCSI device.  We can force the paravirt driver by using the correct command:
virt-install -n server -r 512 -w bridge=virbr0 \
--disk path=/var/lib/libvirt/images/server.img,\
bus=virtio,size=10
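To confirm which bus a VM actually received, dump its definition (virsh ships with libvirt; the grep simply trims the output):
virsh dumpxml server | grep -A3 '<disk'
A paravirt disk appears as <target dev='vda' bus='virtio'/>, while the IDE guess appears as bus='ide'.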
Second, the GUI does not allow a VM's disk type to be selected from the install wizard-- it always defaults to the paravirt driver.  To force a specific driver, on the last screen of the wizard, check the box for "Customize configuration before install".

This will open a new window listing the VM's hardware.  Select "Add Hardware", select "storage", and configure a second disk the same size as the first.  At this point, there is a pull down menu that will specify the driver.  Once the second disk is in place, remove the first.  Removing the first disk before adding the replacement disk can cause problems.

Hint: Once you've modified the hardware, there is no "Apply" option.  Just close the window and the VM will launch.


Kitchen is Getting Close


Got the cabinets-- waiting on the counter top.

Turns out installation was not included with the vent hood, like that makes sense. What's that you say? "Nickel and dime"? Yep.

Sunday, October 07, 2012

Certified Ethical Hacker

I recently took the Certified Ethical Hacker (CEH) class and certification exam.  First, I passed.  Second, I was a little disappointed with the class.

Let's take a look at the first item: I passed the test.  How can anyone reasonably complain about passing a certification test?  Let me contrast the certification test with three other tests. 

RedHat certification tests are all hands on:  Here's a broken computer, fix it.  Generally speaking, if you have at least one year of experience with Linux, take a RedHat class, understand the hands-on labs, you can pass the test.  The ITIL advanced certifications are almost the opposite.  Unless you have several years of workplace experience making IT management decisions, the class is of little help with the certification exams.  In the case of ITIL's advanced certifications, the proctored, paper exams test your ability to apply their methods to your real-world experience.  And then there is VMware, whose certification is a multiple choice, computer based test.  A quick breeze through the VMware docs, and just about anyone can pass the test.  As such, VMware requires you to take their class before you can get the certification, which makes the VCP little more than an indicator of class attendance.

The CEH is a multiple choice, computer based exam, like VMware's.  The difference, however, is that (having taken the class) I'm not certain I could have passed the test based only on what I learned in the class.  Even though the class is structured like a RedHat class, with lecture and hands-on labs, I feel the exam required some real world experience.

Don't get me wrong... I'm not saying that "simply" taking the class should be enough.  I do agree that a candidate should have some experience in the area of study, but I feel that the purpose of the class and labs should be to solidify what they've seen, fill the gaps in what they haven't, and help them identify where they are weak.

And this brings me to the class.

Every class starts the same way-- introduce yourself and say what you hope to get out of this class.  Most people say something like "I'm John, and I want to pass the test."  This time, I said:
I'm Doug, and for the last ten years customers have insisted that I implement obscure security protocols, but I've never seen someone demonstrate that they can successfully breach a properly configured system.  I'm hoping this class will provide some validation that there really is a threat more sophisticated than scripts looking for default passwords.
What did I learn from the class?  Three things:  Windows sucks, Linux is invincible, and once a month 10% of users should all be shot.

At this point, let me interject that the CEH course material was the highest quality training material I have ever seen.  They had color graphics, high quality artwork, no diagrams stolen from vendor brochures, the books even had spines... like a real book that you'd buy at a book store.  We got six disks worth of tools.  And goodies like a backpack and a T-shirt!  The first half hour of class was like being six years old on Christmas day.  The rest of the class was like the feeling you have after you've opened all the Christmas presents, and you realize that it's all over.

Some of the labs were interesting, but there are only so many times you can demonstrate that Microsoft has sacrificed security for usability.  After a couple days, the fact that insiders and stupid users are allowing access to the network was well worn.  There really was no need for more than one lab demonstrating that organizations expose too much information to Google and Netcraft.

I'm going to end with one last thought, and this really doesn't have anything to do with CEH.  Human beings can learn anything from books, but we like to be taught by other human beings.  An instructor provides three basic services in a class:
  1. Focus the student's attention on what is really important in the book, and identify the fluff and filler.
  2. When a student indicates they do not understand the book, offer more detail or alternate examples.
  3. Provide value-add in the form of real world examples or relevant material outside of the book.
If you ever find yourself as a technical instructor, pay heed to what I'm about to say next:  If you can't fulfill at least one of the services above, simply being [ cool | fun | entertaining ] isn't enough.

Tracking SSH Tunnels

Native to Secure Shell (SSH) is the ability to create point-to-point, encrypted tunnels.  The function was designed to provide legacy protocols, such as mail (SMTP/POP), with encryption.  A user could login to an SSH server in their company's DMZ, open a tunnel from their laptop to the server, and redirect their mail client through the tunnel.  On the surface, this sounds like a good idea: it protects the exchange of company data from the "protected" corporate intranet to users "in the field".

But, as with all good things, there is room for abuse.  Consider the opposite scenario:  What if a user inside the corporate intranet SSH'ed to the DMZ server and built a tunnel to allow them to surf the web, thus bypassing the content filters?

Granted, content filters are just a way for the man to oppress middle class workers.  By censoring free thought, the 1% is able to keep the 47% running on the hamster wheel of consumerism.  Hear me, my brothers!  There will come a day when the proletariat will raise up and declare their freedom from the jack-booted thugs of Wall Street and their Illuminati masters.

But I digress...  Where was I?  Oh yes, SSH tunnels. So the question is this:
How can we monitor the SSH tunnels defined on the server to ensure they are not being abused?
Much to my surprise, the answer is:  You can't.

There does not seem to be any mechanism for determining what tunnels exist, and here's why.  The tunnel is defined on the client end, where the SSH application is always listening.  When the client receives a packet that matches a tunnel, the packet is shipped to the server with handling instructions.  When the server gets the packet, it opens the needed socket, fires it off, then closes the socket.  In other words, the connection from the server to the destination is not persistent... it behaves more like UDP than TCP.

Since a socket is opened, it is possible to capture it with lsof -i, but since the socket is transient, trying to catch it in a while/do loop is a matter of pure luck.
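For the curious, the brute force attempt looks something like this (a sketch; expect to catch almost nothing):
while :; do lsof -nP -iTCP -sTCP:ESTABLISHED; sleep .2; done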

This means we have two choices, one of which shouldn't count.

In order to catch someone using a tunnel to surf out of the DMZ, we need an IPtables rule to catch the outbound packets.  As it turns out, any packet originating from a tunnel will use the server's IP address as the source address.  We only need to log the initial connect, so we only need to log the SYN flag.  To further complicate things, our abusive user could be using a proxy, so we can't restrict our checks to ports 80 and 443.
iptables -A OUTPUT -s 192.168.0.1 \
  -o eth0 -p tcp --syn \
  -j LOG --log-prefix "out-syn "
Here, we are looking for OUTPUT, since we are assuming that this DMZ machine is supposed to be building tunnels.  The (-s) address is the address of the DMZ machine.  In this case (-o) eth0 is the internet side of the machine and eth1 would be the intranet side of the machine.  Notice that no port number is assigned to the (-p) TCP statement.  Lastly, we are going to log this message.  (The trailing space in the quotes is significant.)

This rule will catch bad tunnels, but ignore good tunnels, on the grounds that good tunnels will use (-o) eth1 to get to the intranet resources.

If you'll recall, I said there were two choices.  The second is this:
iptables -A OUTPUT -s 192.168.0.1 \
  -o eth0 -p tcp --syn \
  -j DROP
In this case, we are refusing all outbound TCP traffic from the DMZ machine.  (Since DNS is UDP, we can still resolve the addresses of the inbound SSH connections.)  As stated above, we are allowing the good tunnels, since they use (-o) eth1.

So which of the two rules shouldn't count?  The first:  We shouldn't have to "catch" abusive users, we should just stop them.  Of course, we could use both lines to first log them and, second, prevent the connection.  This allows us to know who the abusers are, and bitch slap them for their feeble attempt-- for they are probably using Windows workstations, and deserve to be degraded.
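Stacked, the pair looks like this.  Order matters: LOG does not terminate rule processing, but DROP does, so log first (the address is the DMZ box from the examples above):
iptables -A OUTPUT -s 192.168.0.1 -o eth0 -p tcp --syn \
  -j LOG --log-prefix "out-syn "
iptables -A OUTPUT -s 192.168.0.1 -o eth0 -p tcp --syn \
  -j DROP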

What's that you say, Mr. Boss?  You want me to prove abuse exists before locking down the DMZ?  Okay, we implement rule number 1, log the abuse, and then later lock down with rule number 2.

What's that you say, Mr. Boss?  Prove the abuse exists without implementing rule number 1?  Ah...  No can do.

Oh well, if you want me, I'll be in my cube.  Listening to Slacker internet radio, via an SSH tunnel, through the DMZ.

Saturday, September 01, 2012

Good news, bad news

First, the good news: the ceiling is fixed...

..and the floor is done.

The bad news? Still no kitchen.

And on a side note, ML asked if any of the walls I knocked out were "load bearing". So far, it seems not. I guess that's good news too!

Wednesday, August 08, 2012

The Peter Principle

Everybody knows The Peter Principle as "people get promoted to their level of incompetence."

Unfortunately, that's wrong.

The Peter Principle is the title of a book published in 1969, by Dr Laurence J Peter and Raymond Hull.  Dr Peter did the research; Hull ghost-wrote the book.  I recently picked up a copy, printed in 1970, from The Book Thing free store in Baltimore.  The front cover states:
In a hierarchy, every employee tends to rise to his level of incompetence.
Unfortunately, that's not really the principle, either.

The problem is this:  These concepts are introduced and explained on page seven of the book, but the book is so poorly written that, even though it was a runaway best seller, no one actually read it.  They bought it, talked about it at cocktail parties (remember... this was 1970) but they didn't actually read it.  They read the cover and mutually agreed with each other's lies of understanding. 

It's kind of like going to Las Vegas: no one wants to admit they are the only person in the world who didn't get their room "comp'd", so they lie.  Thus the myth of "the Las Vegas comp" is a self-perpetuating fable.

In reality, The Peter Principle is about the behavior of human systems, which Dr Peter calls "a new science, hierarchiology, the study of hierarchies," and the level of incompetence thing is one small part of the pie.

Have I finished the book?  No, but I am struggling through the text.  At this point it's a challenge.  I shall not be defeated by a small tome of yellowed pages!

And what wonders have been gleaned from my pain and suffering?  Thus far, only that Dr Peter is an elitist pig.  A slightly observant and insightful elitist pig, but a pig nonetheless.  My evidence?  From page 44:
[He] managed, by hard study, to master a foreign language.  It is quite possible that he would have to fill one or more posts in the company's overseas sales organization before being brought home and promoted to his final position of incompetence as sales manager.  Study created a detour in [his] hierarchal flight plan.
In other words, be a good drone and do what I tell you.  You will ultimately fail in life and efforts at self improvement will only delay the inevitable.

The Silicon Valley people have a new saying: "Fail fast.  Fail cheap."  (I hate Silicon Valley people.)  The logic is that if you are going to fail, it is better to do it sooner than later.  This used to be called "cutting your losses."  For Silicon Valley people, it's about recognizing failure on the horizon, accepting that you have over-reached, and moving on to your next meal ticket.  But this is not what Dr Peter is proposing. 

For Dr Peter, it is about recognizing that you will fail, giving up on any chance of enjoying what you're doing, and moving as quickly as possible to the job that you will inevitably hate, where you will spend the rest of your life being ineffective, and suffer the scorn of your co-workers.

Wow.  I just realized that not only is this book poorly written, but it's depressing, too.  No wonder nobody read it.

Tuesday, July 24, 2012

Stupid Yahoo Password Criteria

For about a week, I've been wrestling with my Yahoo! password.  My old, but still functional, Palm Centro mobile phone has an app to connect to Yahoo mail, but it recently stopped working.  Given that it failed the day after I changed my password, one might claim that it was a self-inflicted injury, but no...  it was Yahoo's fault for storing 450,000 passwords in clear text which, of course, got hacked and published.

The smart thing was to change the password.  What Yahoo failed to explain was that in order to be able to login to your account on their mobile site, you have to reset your password from a desktop computer, using the password requirements for the mobile site.  Unfortunately, the password criteria checker they use is Javascript, and it is not configured with the password criteria used on the mobile site.

Bottom line:
You can use special characters !@#$, but not %^&*.
My password contained the percent sign.  I could login from my Windows and Linux machines using IE or Firefox, even using the m.yahoo.com URL to force the browser to the mobile site.  I could not login from my Palm Centro across the SprintPCS network using either the mobile browser or mail app.  Just to prove that this was not a Palm problem, I also could not login from my Android E-reader tablet.

As soon as I changed my password to use a "good" special character, rather than a "bad" special character, all previously denied devices worked.

Friday, July 06, 2012

Use awk To grep "this but not that"

I've run into this situation several times over the decades, but for some reason I never researched an elegant solution. Consider the case of grep'ing to see if a process is running. The simple solution is:
ps -ef | grep "ntpd" 
The problem is that if there is one process matching the regex, this will report two processes, because it will also report the grep process that is grep'ing the process stack. It's kind of like taking a picture of yourself in the mirror. The generic solution to this is:
ps -ef | grep "ntpd" | grep -v "grep" 
In other words, let's launch another grep to grep out the grep. This is only slightly less efficient than taking a picture of yourself in the mirror, then Photoshopping the camera out of the picture.

Today, I found the elegant solution, and it's... awk to the rescue!
ps -ef | awk '/ntpd/ && !/awk/' 
Here, awk is taking the stream and searching for a line that has ntpd and (&&) does not (!) have awk.
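The same trick drops cleanly into a script.  Here's a minimal sketch (the is_running name is mine) that exits zero when a matching process is found:
is_running() {
  # the awk process itself matches the pattern (p=ntpd is on its command line),
  # but the !/awk/ clause filters it out, just like the one-liner above
  ps -ef | awk -v p="$1" '$0 ~ p && !/awk/ { found=1 } END { exit !found }'
}
is_running ntpd && echo "ntpd is up"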

Thursday, June 21, 2012

Ceiling Fan Effectiveness

While investigating why my home is not cooling evenly, I ran a quick test.  Using my infrared thermometer, I ran some scans on two rooms.  The test was done at noon, with the air conditioner enabled, set on 85.  The outside temperature as measured by the air conditioner (in the shade) was 89.

In room one (R1) the temperature at the junction of an outside wall, an inside wall, and the ceiling, was 85.9 degrees Fahrenheit.  In room two (R2) the temperature at the junction of the same outside wall, same inside wall, and same ceiling, was 86.4 degrees.  Needless to say, a temperature variation of half a degree might seem negligible, certainly within an acceptable margin of error, except for one thing:

In R1, the ceiling fan had been running four hours.

This raises an interesting question: if running the ceiling fan for four hours only cools the room by half a degree, is it really working?  Yes, because it's not a ceiling fan's job to cool a room: its job is to circulate air.

But wait!  It gets more interesting.

The temperature of the fan's motor housing was 97 degrees.  This means that the fan was acting as a heat source.  Of course the ceiling fan is equipped with a light kit.  With the light illuminated for one hour, the temperature of the globe around the single incandescent bulb increased from ambient to 95 degrees.  This means the light was also acting as a heat source.  (The four bulb "tulip" light kit in R3 was 103 degrees, using CFL bulbs.  The three bulb halogen fixture in R2 was 118 degrees.)

This experiment causes me to question the value of ceiling fans.  In both R1 and R2 the temperature at the baseboard was 81 degrees.  Conventional wisdom dictates that the fan in R1 would pull cooler air up, or push hotter air down.  By circulating the air, the room would be more evenly cooled.  Yet, the measurements indicate the effect of circulation is not significant.  On top of costing electricity to operate, it's possible that the fan is adding heat, not subtracting.

So why do people use ceiling fans if they don't work?  One word: breeze.  People think ceiling fans work, because they can feel a breeze, which seems to have a cooling effect.  (In reality, a breeze is only effective on bare skin because it assists evaporation of perspiration.)

Thus, I will conclude with this philosophical question:  
If a ceiling fan is running in a room,
and there is no one around to feel it,
does it do any good? 
No.

Monday, June 18, 2012

Tear Down The House

To those that suggested that the solution to uneven cooling of my home was to tear the house down... I'm way ahead of you. A couple of weekends ago, I took out a ceiling and two walls.

The ceiling had to come out:  There was a leak in the upstairs plumbing that had caused the ceiling to sag in two places.  Once the leaks were fixed, the ceiling was "repaired" by a couple of "professionals".  And then it sagged again.  So, out it went.

And while you're taking out the ceiling, you might as well tear down some walls.  The red line represents a wall that separated the everyday dining room from the formal dining room, which has spent the majority of the last five years as a storage room.  The purple line represents the wall that separated the kitchen from the formal dining room.  The yellow circle is around the light switches, that were in the walls, and now just hang down for easy reach.

The big black thing in the background is the Big Fucking Refrigerator (BFR).  The difference between a BFR and a refrigerator is that a BFR damages doors and walls when it goes in the house.  The BFR used to sit in the corner where the red and purple walls had been.  Now it's against the outside wall.

Sunday, June 17, 2012

Home Not Cooling Evenly

I have a running joke about my house:  I think they installed the insulation backwards, because in the winter, the house is colder than it is outside, and in the summer it is hotter.  Though not completely accurate, I have always been perplexed by the thermal dynamics of this property.

Let's start off with some background.  Like most of the Washington, DC, area (and east coast cities) my home is a "town" house.  Generally speaking, a town house is wider than a "row" house.  A town house is typically two rooms wide, whereas a row house is one room wide.  In both cases, the home would be several stories tall and several rooms deep-- potentially half a city block deep.  (In Tennessee, a house that is one room wide and several rooms deep is called a "shotgun" house, because you could shoot your shotgun in the front door, and blow out the back door.  Sigh...  Tennessee was such a fun place.)

In my case, when you walk in the front door, you can turn right to the living room, walk down a flight of stairs to the master suite, or walk up half a flight to the kitchen, dining room, family room.  So, that's two and a half stories.  From the downstairs master suite, you can go down another level to the computer bunker (ie: basement.)  From the kitchen, you can go up a story to more bedrooms. Thus, we're at four and a half stories.  Turns out there is enough room in the roof for another 15x17 room, but it's never been built out.

The dominant feature of the town house is the pseudo-spiral staircase.  I call it pseudo-spiral because it's a rectangular box, the size of an elevator shaft, running from bottom to top.  There are three stairs, a set of four stairs at 45 degree angles, then three stairs, and two stairs at 45 degrees.  (If you're not good at geometry, you just climbed one story and turned 270 degrees, so you are facing to the left of where you started.)  There is a small landing, and then another sequence of steps.  Repeat this once down and twice up.

I call it the M.C. Escher house:

But here's what you really need to know about the house:  Using my new Harbor Freight Non-contact Laser Thermometer, I have verified that from the lowest climate controlled point of the house to the highest climate controlled point in the house, there is a thirty degree temperature variation.

Yes, that's right: 30 degrees Fahrenheit!  With the thermostat set on 85 degrees, and an outside temperature of 75 degrees, the baseboard of the master suite is 65 degrees.  The largest bedroom upstairs has a 12 foot vaulted ceiling.  At the peak of the ceiling, the temperature is 95 degrees.  And these are ambient temperatures-- the AC is not running!

First question: Why such a huge temperature variation?  Simple answer:  Heat rises, so the stairwell acts as a ventilation silo that allows all the hot air to rise, and cool air to sink.

Second question:  How do I fix it?

And so we begin...

Harbor Freight Infrared Thermometer

This weekend, I bought a Harbor Freight Non-contact Laser Thermometer.  I'll explain why in a later post, but I gotta say, I'm impressed with how well this thing performs... especially since it costs $39.99... and I had a 20% off coupon!  :)

First, the laser doesn't "do" anything-- it's actually just a pointer to help aim the thing.  It's really a focused infrared measurement device.  The instructions state that it has an accurate range of about 8 feet.  After that, the accuracy begins to drop, but from my minimal experimentation, the loss of accuracy is negligible.  Truthfully, if it's accurate within +/- five degrees, it will be worth the price.

The device uses one nine volt battery, which gets depleted fairly quickly. 
Note:  The instructions do not explain how to install the battery.  The front part of the hand grip (black part) slides down.  At the top of the grip, near the trigger, there are four ridges on each side.  Hold the top of the "gun" in your left hand, squeeze the ridges with the thumb and forefinger of your right hand, and slide the grip away from the top of the gun.
Here's some backyard fun.  With fire!

Flames: 598 degrees Fahrenheit
Almost ready...
Coals: 946 degrees Fahrenheit
Sure beats licking the coals to see if they are hot enough.

Welcome Back

David and Paul both complained, separately, that I had not updated my blog in a while. I didn't realize that it had been six months. Time flies when you're having fun... Not!

You see, most of my posts are based on R&D for work related projects, or humorous (read: absurd) events that happen during the day. Recently, much of my day is tied up with logistical silliness. Nobody wants to read a blog post about delivery guys getting mad at me because I won't accept a server rack, because the inventory team won't give me permission to bring it in the building, because my boss didn't get their approval on the rack before he bought it. Boring.

If I wanted to write boring posts that nobody cared about and would not enrich the lives of humanity, I'd post to Facebook.

Having said that, I've started a few personal projects that might be interesting.

Sunday, January 22, 2012

Mech Hero MMO

Lately I've been wasting a lot of time playing a massive multiplayer online (MMO) game called Mech Hero. It's very similar to Star Craft or Warcraft 2100, with shades of Mech Warrior blended along the edges. What I find interesting about this one is that it is completely browser based and is built around an elegant freemium business model.

The basic premise is that you run a city, and have to selectively build the city's infrastructure to produce and support a mech army. You can harvest resources, attack random "non-player" targets, or raid other players' cities. There are both strategic and tactical aspects of the game.

So here's what I've learned:
* Don't research support vehicles past level 2.
* Don't research armor plating.
* You don't have to research weapons sequentially, so don't invest in assault rifles.
* Until your resources exceed level 9, don't build resource storage silos past level 3. Instead, build bunkers. Bunkers don't hold as much as silos, but they protect resources from pillagers.
* The cost-benefit curve for bunkers goes exponential at level 7 and negative at 11. Once a bunker hits level 5, build another. Once your resources exceed 10, upgrade the bunkers to level 7.
* The cost-benefit curve for resources starts to suffer after level 10 and goes negative around 13 to 14.
* In the case of electrical resources, build to level 11. Upgrading one power plant to level 13 would yield a 17% boost in power for about $110K. Amortized across the four plants, that ends up being about 4%. Upgrading one plant to level 12, and installing a power coil, costs the same, but increases by 8%... for all plants, giving a 30% total yield.
* Always use recon satellites in sets of three.
* Build a trading post and transport units early. Weapons can be purchased for far less than the cost to research them.
* When attacking a non-player controlled (NPC) training site, it is almost always defended by a single Raptor with either machine guns or assault rifles. Attack with 2 lasers and 2 cargo carts. The lasers will keep the mech out of gun range, and two carts will carry the full bounty.
* One cargo cart can carry about 2500 units of resources and a harvester can carry about 2000.
* Whenever you send a mech to battle, have harvesters standing by: If you lose, your weapons will be ejected from the battle in a debris field.
* Each weapon has a weight. When harvesters retrieve weapons from a debris field, the weapon's weight counts as 100 times the resource weight. This means a machine gun with a weight of 10 will take as much space as 1000 resources.
* Precision Laser Rifle (PLR) is the early weapon of choice. Once you have shields, start thinking about railguns and plasma weapons. Avoid assault rifles and mortars.
* Weapons that are too big to be carried by a harvester (a PLR weighs 25, displaces 2500) can be "dis-assembled" and carried by two harvesters.