Monday, December 26, 2011

TED Stuff to Remember

I've been a big fan of TED for several years, because it offers insight into what "the intellectual elite" consider important. Most of these people wouldn't give me the time of day, but occasionally, a few of them make some good points. I had some notes written on the back of an envelope that I wanted to throw away... ah... I meant recycle... so I figured I better blog them so I could find them later.

On Being Wrong
This presentation has a cute graphic of what it looks like to "realize you are wrong". I didn't care for the speaker, but the take away was what she called the "Unfortunate Assumptions of Wrongness":
There are three reasons someone might think you are wrong:
1. Ignorance- They don't understand the facts, so perhaps you can educate them.
2. Idiocy- You've explained it to them, so they must be too stupid to understand.
3. Evil- Maybe they do understand, but are trying to undermine your brilliant plan.
In my case, it's always number three.

The Moral Mind
Very politically slanted, but not wrong. The speaker states that there are five moral values; "Conservatives" acknowledge the importance of all five, but "Liberals" acknowledge only two.
1. Harm/Care - protection
2. Fairness/Reciprocity - don't lie, cheat, steal
3. Ingroup/Loyalty - community, tribalism
4. Authority/Respect - patriotism
5. Purity/Sanctity - sexuality
The research seems to indicate that everyone agrees on 1 & 2, but that the divide is the "Conservative" insistence on the importance of 3, 4, and 5.


Self Deception

This presentation has a good explanation of the difference between a false positive and false negative, and how it relates to decision models.
Finding order in chaos where none exists is patternicity.
* More patterns are perceived by the left eye.
If the pattern (or model) is wrong, we have made either:
  Type I Error (false positive): believing something that is not real.
  Type II Error (false negative): not believing what is real.
When evaluating the outcome of a decision where the threat could be inanimate versus a predator, we naturally err on the side of assuming an agent. This is called agenticity.
Agenticity is a difficult concept, especially since the speaker wraps it around religion, but the base of it is the belief that others can control chaos that we can't. If you're walking through the jungle and there is a rustle in the grass, it could be the wind or a lioness. If you assume it is the wind, and it is a predator, you get eaten. If you assume it is a predator, and that the predator has heightened senses, is faster and stronger, then you become over-cautious. If it is not a lioness ready to attack, it's a "false positive", because you attributed agenticity to a sound without investigation. But you survive! Thus, we are wired for false positives.

Tuesday, December 06, 2011

Merrill *not* Lynch Retirement Calculator

For those of you from outer space, the American economy has been having a hard time recently. One shining example is the paragon of Wall Street, Merrill Lynch, which imploded nicely several years ago, but was too big to fail, so it was "bailed out". It now survives as a subsidiary of Bank of America, who was also too big to fail.

Well, I stumbled upon a web-based retirement calculator (click the Find Out button to the right) and decided to have some fun. I learned something interesting.

First, the rules of the game: The calculator asks questions and determines a magic dollar value that you have to achieve. Four factors drive the calculator:
Current age
Current retirement contributions
Current income
Projected retirement age
Obviously the last variable is stupid, since everyone knows that retirement age is 65.

Given my actual age, a reasonable estimate of the value of my retirement accounts, and a reasonable estimate of my current salary, the calculator yielded a pass/fail rating. Pass was defined as having enough money to maintain my current lifestyle, and fail was defined as running out of money before I died... Which is apparently at age 91.

I had control over two factors, which determined if I passed or failed: First, the amount of money I contribute each month for retirement. Second, my investment style, defined by my degree of risk exposure. What I did not have control over was market performance, so I was presented output based on average market performance and below average market performance, which I guess means Merrill and Bank of America do not expect above average market performance over the next 45 years.

And what I learned was I get a passing grade if I save X dollars a month, using Y investment strategy:
$10,000 Conservative
$9,200  Moderately Conservative
$8,700  Moderate Risk
$8,500  Moderately Aggressive
$8,200  Aggressive

Needless to say, my New Year's resolution will not be to save $10,000 a month. So, I guess I "lose". But here's what I see as the moral of the story: The difference between playing it safe and taking the biggest gambles is less than 20%. If I put in a more realistic monthly contribution of $1,500 per month and act conservatively, I retire at 65 and run out of money at 67. If I'm aggressive, I run out of money at 68.
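Out of curiosity, drawdown arithmetic like this can be sketched with a toy model. Every number below is my own illustrative assumption (steady returns, flat contributions, fixed withdrawal), not anything from the Merrill calculator:

```shell
# Toy retirement model: compound a balance with monthly contributions
# until retirement, then withdraw monthly and report the age at which
# it runs dry. All inputs are made-up examples.
result=$(awk 'BEGIN {
    bal = 50000;  contrib = 1500        # starting balance, monthly savings
    r = 0.03 / 12                       # assumed monthly rate of return
    age = 46; retire = 65; draw = 6000  # retire at 65, spend 6k/month
    for (m = 0; m < (retire - age) * 12; m++)
        bal = bal * (1 + r) + contrib
    a = retire
    while (bal > 0 && a < 120) { bal = bal * (1 + r) - draw; a += 1.0 / 12 }
    printf "broke near age %d", int(a)
}')
echo "$result"
```

Nudging the assumed rate of return up and down is a quick way to see how little the "aggressive" column buys once the monthly contribution dominates the outcome.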

So, I guess I'll go aggressive. Let's be optimistic! What's the worst thing that could happen? After all, what's the likelihood that Wall Street will screw up again in the next 20 years?

Tuesday, November 08, 2011

Creating a New Project in SVN

It had been over a year since I'd created a new SVN project, so of course I forgot how and had to waste an hour trying to figure it out. Assuming a remote SVN server with SSH and working keys...

On the server:
cd /svnrepos
svnadmin create newproject
The project should now be visible in WebSVN.

On the client:
mkdir -p /tmp/newproject/{branches,tags,trunk}
cd /tmp/newproject
svn import -m "New Project" . \
    svn+ssh://websvn/svnrepos/newproject
Refresh WebSVN, and the three subdirectories should be displayed, but we need to "prime the pump" by uploading an active item.
cd trunk;
svn co svn+ssh://websvn/svnrepos/newproject/trunk .
touch dummy.txt; svn add *
A         dummy.txt
svn ci . -m "First"
Click the trunk link in WebSVN, and the new file should be visible and the project should be active. Unfortunately, it's in the wrong place... this is in /tmp.

Wipe out the "prime" directory:
rm -rf /tmp/newproject
Move to the "real" location and checkout the new project:
cd /some/path
svn co svn+ssh://websvn/svnrepos/newproject/trunk .
svn del dummy.txt; svn add *; svn status
Populate the directory with the project files. The next check-in should remove the dummy.txt file and sync with the server.
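For reference, the same flow can be dry-run locally against a file:// URL, which takes SSH out of the picture entirely; all paths here are throwaway examples:

```shell
# Local dry run of the new-project flow using a file:// URL.
REPO=/tmp/svnrepos
mkdir -p "$REPO" /tmp/newproject/{branches,tags,trunk}
svnadmin create "$REPO/newproject"
svn import -m "New Project" /tmp/newproject "file://$REPO/newproject"
# Check out trunk and "prime the pump" with a dummy file.
svn co "file://$REPO/newproject/trunk" /tmp/newproject-wc
cd /tmp/newproject-wc
touch dummy.txt
svn add dummy.txt
svn ci -m "First"
```

Swap the file:// URL for svn+ssh://websvn/svnrepos/newproject and the commands are the same as above.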

Sunday, October 16, 2011

Potty Humor

In a gents, somewhere in the UK, a sink manufactured by Thomas Crapper & Company.
Uh huh huh huh-- He said crapper.

Recyclable Turkey Gravy

...and it's delicious and wholesome, too.

Tuesday, September 27, 2011

Good-bye Gnome

In the latest machine re-org, I reloaded my workstation with Fedora 15. I don't like it. I think they've made several wrong turns, the single biggest being their implementation of Gnome 3.

With Gnome 3, they have set a minimum acceptable video hardware platform. That's fine, because they provided a fallback mode in case the machine does not support 3D rendering. Except for one minor problem... most of the desktop's features do not work in fallback mode.

For instance... The ability to exit the GUI. Yes: there is no logout, quit, exit, stop, or leave feature. The only way out is to open a terminal and init 3. If they missed something as important as an exit function, just imagine what else they missed. (Hint: a lot!)

So, I'm trying to switch to KDE and it looks promising. As a matter of fact, I have found a fix for my single biggest complaint with KDE. I am so used to highlighting text in a terminal window and pressing "Shift-Insert", that for five years, I have refused to use KDE because it required the extra step of "Ctrl-Insert".

On the taskbar, near the clock is a scissors icon. Left click, select Configure Klipper, and click "Synchronize contents of the clipboard and the selection". Now it works the way I want!

Monday, September 12, 2011

VM Autostart on XenServer

It's a good thing I don't need to run VMware, because they are so dependent on a pre-existing Microsoft infrastructure, that I couldn't run it, even if I tried. And I've tried. Of course, I should just go ahead and invest the $3,000 in Microsoft software... just so I can invest $1,000 in VMware software. Or, I could use Citrix XenServer.

Unfortunately, Citrix is doing everything in their power to ruin their entry level product, based on the philosophy that if they strip enough useful features from their product, eventually people will have no choice but to buy it. I don't know... If I'm going to throw two grand at them, I might as well up the ante and buy VMware.

Or just hack their product. I mean, come on guys... are you even trying?

So, I've got a cluster of XenServers, and I want to start a VM when the host boots. Ah! Upgrade XenServer to enable that! Or add the following to the /etc/rc.d/rc.local for all hosts:
xe vm-start name-label=YourVmName
If the host is the first to boot, it starts the VM. If the host is not the first, it attempts the command, gets a failure message (because the VM is in the wrong power state... on), and boots as normal.
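A slightly more defensive version of that rc.local fragment, in case the xe toolstack isn't ready yet when rc.local fires (the retry count and delay are arbitrary guesses on my part):

```shell
# /etc/rc.d/rc.local fragment: retry in the background in case xapi
# isn't up yet; a "wrong power state" error just means another host
# in the pool already started the VM, which is fine.
(
    for try in 1 2 3 4 5; do
        xe vm-start name-label=YourVmName && break
        sleep 30
    done
) &
```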

And now we give thanks to the command line Gods.

Wednesday, September 07, 2011

Linux Rescue for VM on XenServer

I finally figured out how to rescue a Linux VM running on a Citrix XenServer host, and believe it or not, it is completely unintuitive! First, power off the VM, though if you're going to rescue mode... you're probably "down" already. Second, mount the rescue media (CD/DVD). Third, make sure the VM is highlighted on the left pane of XenCenter.

Here's the trick: Across the top of the XenCenter menu, select VM, and Start/Shut down, and Start in Recovery Mode. The machine should boot from the rescue media. Proceed per rescue SOP.

Be forewarned... For some reason, booting is incredibly slow.

Monday, September 05, 2011

SSH Tunneling of X11 Apps

I had occasion to finally test something I've been wondering about for a while: What is the minimal configuration to allow an X11 application to tunnel through SSH? First, this procedure assumes you have a workstation that can display X11 applications. This can either be a Linux desktop or a Windows machine running Xming or another X server.

Second, install a system with only the core or base packages, possibly by building the system through kickstart. Third, edit the /etc/ssh/sshd_config and make sure X11Forwarding is set to yes. Reload if needed.

Next install a simple graphical application; for my test I used xclock:
yum install -y xorg-x11-apps
Attempt to run:
ssh 192.168.1.1 -X xclock
Error: Can't open display
In this case a failure is what we expect.

Conventional wisdom says we need to install the entire "X11 Window System" group, which will grab almost 100 packages. Instead install one RPM:
yum install -y xorg-x11-xauth
ssh 192.168.1.1 -X xclock

Warning: <snip> (repeated several times)
...but behold! A glorious xclock. The errors are from not loading fonts on the remote machine. Oddly, if we install xterm, it also complains, yet it works just fine.

A side note on this procedure: To avoid having to issue -X (or -Y) on the SSH command line, change /etc/ssh/ssh_config, adding:
ForwardX11 yes

Adding gedit will require 57 more packages.
Adding kedit will require 68 more packages.
Best choice: gvim, requiring three packages.
yum install -y xorg-x11-fonts-Type1 xorg-x11-fonts-misc
yum install -y gvim

Saturday, September 03, 2011

Rubus Cabernet Sauvignon

From the back label: "flavors of blackberry, currant, and chocolate"... unfortunately, I'm allergic to blackberry and currant. It took several years of research, but my very unscientific explanation is that blackberries, grapes, and blueberries are all related. They all grow on vines; grapes and blueberries are smooth, but blackberries (and raspberries) are "bumpy". I'm allergic to bumpy berries.

In the universal order of things, some grapes are closer to blackberries and some are closer to blueberries. Currants are smooth, but fall right between the blackberries and grapes. As such, any wine that describes itself as tasting of currants and blackberries will cause my throat to swell, and I'll choke and cough. Cabernet Sauvignon is such a wine.

So why would I buy a bottle of wine with the potential of choking me to death? I didn't, my son the chef bought this, not knowing the potential side effects. Of course I drank it anyway.

Once you discount the likelihood of a slow and torturous demise, he actually did a good job. He picked an older wine, a 2007, specifically claiming to be "old vine" grapes. The older the vines, the deeper the root system, and the more stable the flavor. The wine had good body and flavor, but I can only give it a 4 out of 10, on the grounds that I wouldn't chance it again.

Sunday, August 28, 2011

Unlocking Citrix XenServer Memory

Wow... Has it been that long since I posted? Yeah, it's been a wild few weeks, what with hurricanes, earthquakes, hail storms, and trying to sell a piece of underwater real estate at a 30% loss. Whew! But hey, here's a XenServer hack for you:

I loaded XenServer 6 Beta on a cluster of servers, and was disappointed to find that they had moved memory management out of the "free" product and into one of the "pay" tiers. This means that if you want to change a VM's memory allocation, you have to pay an extra licensing fee. Silly. Especially since it was so easy to bypass.

Create a template from a VM. Log into the Xenserver console via SSH as root. Using the template name that appears in XenCenter:
# xe template-list name-label=a-Windows_Vista_x86-x2
uuid ( RO) : 4c<snip>28
name-label ( RW): a-Windows_Vista_x86-x2
name-description ( RW): SP2, Registered
What we need is the UUID. View the template parameters:
# xe template-param-list uuid=4c<snip>28
Here, look for the min/max lines:
memory-static-max ( RW): 1073741824
memory-dynamic-max ( RW): 1073741824
memory-dynamic-min ( RW): 1073741824
memory-static-min ( RW): 1073741824
In this case, the template will create a VM that can only be 1G... never more... never less.

Let's change the bottom value:
# xe template-param-set uuid=4c<snip>28 memory-static-min=1
# xe template-param-list uuid=4c<snip>28 | grep " mem.*-m"
memory-static-max ( RW): 1073741824
memory-dynamic-max ( RW): 1073741824
memory-dynamic-min ( RW): 1073741824
memory-static-min ( RW): 1
I haven't figured out the other three values, as any attempt to change them throws an error saying they must all be equal. They do form the top limit, but once you strip out all that Aeroglass crap and disable half the services, Vista runs just fine at less than 512Mb. But just for the record, I wouldn't suggest you try to run Vista on 1 byte of memory.

Tuesday, August 02, 2011

Changing Linux/Unix "ls" Time Format

I thought I had documented this, but had to look it up again today. To change the format of the ls command's timestamps, use:
ls -l --time-style=+%s
...where the format is the same as those listed for the date command. See man date. I like +%s, because it allows scripts to calculate "elapsed time" between two file modifications.

Another useful option is --time, which displays atime or ctime rather than mtime.
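The elapsed-time trick mentioned above can be sketched as a small function; the awk field number assumes the standard GNU "ls -l" column order (perms, links, owner, group, size, time, name):

```shell
# Seconds elapsed between two files' modification times, using the
# epoch-seconds time style. The timestamp lands in field 6 of ls -l.
elapsed() {
    t1=$(ls -l --time-style=+%s "$1" | awk '{print $6}')
    t2=$(ls -l --time-style=+%s "$2" | awk '{print $6}')
    echo $(( t2 - t1 ))
}

# Demo with two throwaway files five minutes apart:
touch -d '2011-08-02 12:00:00' /tmp/old.txt
touch -d '2011-08-02 12:05:00' /tmp/new.txt
elapsed /tmp/old.txt /tmp/new.txt   # prints 300
```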

Wednesday, June 29, 2011

ITIL Expert, I Has it

It was a long, hard, painful march of sorrow and circumstance, but I finally made it: I am a certified ITIL Expert. Having succeeded on my voyage of knowledge and betterment, I am totally convinced that 99% of what IT managers think is right, is wrong. At least 50% of those wrong things could be corrected by adopting 30% of ITIL.

But... Adopting 100% of ITIL is not the answer. Want to know where I learned that? It's in the ITIL manuals.

Would I suggest the Global Knowledge classes? I just don't think so. They are not designed for the average guy: they are designed for PMP and Six Sigma people, for people that are significantly invested in the soft side of IT.

As it turns out, I passed by two points, because of their lop-sided grading. When you consider that most of the Expert phase was based on Service Strategy and Continual Service Improvement, and those were my lower scores, I guess I was right where I should have been.
SS   5 3 3 3 5 1 5 5 = 30
SD   5 5 5 5 1 5 5 3 = 34
ST   1 5 5 3 5 5 0 5 = 32
SO   5 5 5 1 1 5 5 5 = 32
CSI  5 5 5 5 3 0 3 5 = 31
MAL  5 5 1 5 3 0 5 5 = 29
Would I do it all over again? Ask me in five years.

Sunday, June 19, 2011

"Falling Skies" Hits Bottom Fast

Luckily, it only took one night to realize that this one was a complete waste of time, unlike "The Event", which required two nights. Thankfully, "Falling Skies" ran episode one and two, back-to-back on one night, which saved the trouble of spending the week thinking "maybe it will get better." Where did it go wrong? Glad you asked.

In the movie "Signs", a family finds themselves in the midst of an alien invasion. In the series "Lost", a group of plane crash survivors find themselves on a magical island. At the end of "Signs", we don't know why the aliens invaded Earth. At the end of "Lost", we don't know why the island is magical. In both cases, the story worked, because the mystery was just below the context of the story.

In "Falling Skies", 99% of the Earth's population has been wiped out in seven months. In the greater Boston area, the total head count is about 3,000. The surviving humans' commander decides to split them into groups of 300: 100 fighters and 200 civilians. And so, off they go, in broad daylight, walking down the street, driving cars, trucks, and motorcycles, in full sight of the alien mothership.

How is this possible? It's a mystery! Yeah, that's it... a mystery. It makes you want to watch more!

Or maybe it's that the aliens are nocturnal: they are incapable of operating in Earth's harsh sunlight. Makes sense: They can build intergalactic spacecraft and robotic combat units, but can't build the opposite of a starlight scope. Wow, it's too bad the US military didn't survive long enough to realize that the aliens were most vulnerable while humans were at their strongest.

But after seven months, we've learned that we can road march a retreat with complete impunity, since the aliens are helpless in the daylight. So when do we attack the food warehouse? At night. Yep. Every combat sequence was at night.

As much as I would like to spend hours and hours and hours going on about all the completely ignorant things that were wrong with this show, I think I'll stop here. It's sad enough that I wasted this much time on this.

Monday, May 30, 2011

Fantasmic Nouveau Retro Future Robot Art

Check out the website of really cool works by artist Nemo Gould.

Sunday, May 22, 2011

Andre California Champagne

I didn't want it, but got it anyway. I'd had it a zillion years ago, and knew what to expect. This is a wine you buy for the storybook wedding where you are judging the quality of the event by the number of people you invite.

I only invited Ronald and Nancy Reagan.
    They politely declined the invitation.
        Something about being busy being President, at the time.

So, if you have to buy 24 bottles of sparkling wine for people you don't really care about, buy this. Otherwise, invite 1/4 the number of people, and spend four times as much on real Champagne or good Prosecco.

3 of 10

Gerd Anselmann Dornfelder

Every time I go wine shopping, I look at the German reds. Unfortunately, the reds the Germans send us fall far short of what they keep for themselves. As a result, when people think of German wines, they think white (Riesling), not red. The reason: climate. The colder the climate, the more difficult it is to grow red than white.

I've suffered many bad German imports-- most only slightly less disgusting than taking a can of frozen grape juice, adding a cup of corn syrup, and diluting with a cup of pure grain alcohol. And even though Spatburgunder is Pinot Noir, most California versions are better.

Dornfelder is a unique varietal, native to Germany. Again, the climate makes it sweet, but this one was only mildly so. The flavor is similar to a Lambrusco, but the stand out for this wine was the body and color: Wow!

I'll be stocking this one, especially for the price of about $15. A firm 8 of 10.

Sunday, May 01, 2011

RHEL 6 Virtualization, Memory

There are two memory settings presented in virt-manager: Allocation and Maximum. The Allocation setting is what will appear in /proc/meminfo and top. The Maximum is what can be used to boot the VM. Interestingly, the kernel will refuse to boot without enough RAM, but once booted, will run with significantly less.

The Maximum value needs to be more than 348MB. Any lower, and boot time is noticeably slower due to swap activity.

The Allocation value needs to be more than 148MB. The VM won't crash until the allocation is about 115MB, but there are several factors that could affect that number. Obviously, 148MB may not meet every VM's needs, but it seems to be the lowest reasonable limit.

Keep in mind, the Maximum is always allocated to the VM at boot time, and then lowers to the Allocation, so don't set it too high. The Allocation can be dynamically changed for a running VM, but cannot exceed the Maximum. Any changes to the Maximum require a reboot.

RHEL 6 Virtualization, Paravirtualization

Paravirtualization is a big deal. It is avoided by VMware, is alchemy in Citrix Xen, and is cryptically alluded to in RHEL 6. Yet, for those of us that are almost exclusively Linux, the performance and density advantages are huge. Even Windows XP performance is noticeably improved running "para-virt". (As for Vista and Win7... they're both hogs, no matter what.)

In ten words or less, paravirtualization improves performance by loading a version of the operating system optimized for the host's hypervisor.

In the RHEL 6 Virtualization guide Chapter 8, they state that para-virt does not work with KVM. This would imply that there is no way to optimize a RHEL 6 VM on the RHEL 6 platform. Given that, why not just run VMware?

Yet in Chapter 11, they mention that para-virt drivers are automatically loaded and installed for RHEL 6 VMs and Linux VMs based on the 2.6.27 or newer kernel.

So... Which is it... Para-virt yes or para-virt no?

Survey says: Kernel no, drivers yes. But, of course, there's a catch.

Once the VM is installed and running, execute an lsmod | grep virtio. Look at the last line. The items at the right of the numbers will indicate which para-virt drivers are used. You want four of them, but may only have three. Depending on how the VM accesses the outside network, the virtio_net driver may be missing.

To enable virtio_net, a specific sequence of events must be followed:
1. Power off the VM
2. From virt-manager, Open the VM
3. Select View and Details
4. Select NIC
5. Change Device model to "virtio"
6. Apply, exit, and Run the VM
Upon boot, the virtio_net driver should be listed.
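For the record, the same change can be made directly in the domain XML (via virsh edit, assuming a hypothetical VM named rhel6vm); the GUI step above boils down to the model element on the interface:

```xml
<!-- Fragment of the domain XML from "virsh edit rhel6vm";
     MAC address and network name are example values. -->
<interface type='network'>
  <mac address='52:54:00:00:00:01'/>
  <source network='default'/>
  <model type='virtio'/>   <!-- this is what the Device model field sets -->
</interface>
```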

Call it a bug, but if you try this with the VM powered on, it will claim to work, but will not.

As for performance without a para-virt kernel, I am still a little skeptical.

The President's Speech

Clever:

Tuesday, April 26, 2011

RHEL 6 Virtualization - Bridged Interface

With version 6, Red Hat has finally fixed the long standing problem of not being able to use the GUI to configure a Virtual Machine shared connection. Let's first review the types of VM network connections:
* RH=y  CX=y  VW=y Internal
* RH=y  CX=y  VW=y Dedicated or Slaved NIC
* RH=y  CX=y  VW=y Routed or VLAN'd
* RH=y  CX=y  VW=y Shared or Bridged
* RH=y  CX=n  VW=n Network Address Translation (NAT)
(Bold entries are the "default" config.)
An internal connection does not route traffic off the host system. A dedicated (also called slaved) connection requires a separate NIC for each VM, which is very inefficient. A connection that is routed or VLAN'd requires the network be aware of the specialized configuration. A shared or bridged connection (what we're after) extends the real world subnets into the virtual machines. The last type, NAT, allows the VMs to communicate out, but does not permit inbound requests, rendering it useless for servers.

Unfortunately, Red Hat uses NAT by default, even though their virtualization technology is principally used for server consolidation. As if this is a good idea. To make matters worse, their documentation still suggests manually configuring a shared connection, and does not explain that once you're done, you won't be able to see the connection in the GUI.

Here's how to do it right:

From virt-manager, connect to the host, click Edit, and select Host Details. On the Network Interfaces tab, click the plus sign (+) at the lower left. In the pop-up, select "Bridge" and Forward. Assign a name-- I recommend br followed by the eth number of the card you are sharing. In other words, if you are sharing eth1, name it br1.

On the same screen, set the start mode to "onboot", check "Activate now", and check the target NIC that you want the VMs to access. Take a deep breath, hold it, and click Finish. Scary things will happen, but after about a minute, the window should respond.

Notice that the eth item has disappeared from the list and been replaced by the newly defined bridge. Now click the Virtual Networks tab, and notice nothing has changed. Why doesn't the new connection appear in the list? More evidence that Red Hat's interface is the least intuitive of all the vendors. This tab is a list of virtual networks, and a bridged connection is an extension of the physical network. (Yeah, while technically correct, it doesn't make sense to me either.)

When provisioning VMs, make sure to expand Advanced options and choose the br device.

*** NOTE ***
An excellent discussion of the underlying technology is available on Dale Bewley's blog. You'll need this for kickstarting hosts.

Sunday, April 24, 2011

RHEL 6 Virtualization As Non-root User

Red Hat has always been overconfident about the use of the root account over SSH. By default they allow direct logon to the root account via SSH, because the first "S" stands for Secure. But that's not the point... You should never let anyone logon directly as root. Always access as a user, and escalate privileges.

With RHEL 6 virtualization, I ran into a problem with virt-manager, in that it refused access from a non-root account. If I logged in as myself, the app would start, but not allow connections, complaining:
Unable to open a connection to the libvirt management daemon.
Verify that:
- The 'libvirtd' daemon has been started
Further digging (clicking "Details") shows a couple Permission denied messages. Using sudo doesn't help, switching to root doesn't help, but SSH'ing in as root works. So, what's the fix?

Turns out there is security group that allows the identification of trusted users. Edit the /etc/libvirt/libvirtd.conf file and uncomment three lines:
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
auth_unix_rw = "none"
The third line bypasses the "polkit" security mechanism... since it doesn't work, anyway.

The natural reaction would now be to issue a service libvirtd reload command, but not so fast... that won't work. (But feel free to try it if you like-- I'll wait.)

List the /var/run/libvirt dir. You should see a libvirt-sock file owned by root.root with permissions of 700. Use service libvirtd stop, and the file will disappear. Issue service libvirtd start and it should fail to start, claiming it cannot load the configuration file, which is not true. The problem is that we told it to allow access to members of group "libvirt", which does not exist. Create it and add yourself:
groupadd -g 170 libvirt; usermod -a -G libvirt doug
Logoff, login, and verify with the id command.

Issue the start command again, and the service should start. List the /var/run/libvirt dir. This time, the file is owned by root.libvirt with permissions of 770. We can now access and control the server as a non-root user.

Thursday, April 21, 2011

RHEL 6 - Network Gotcha

Earlier, I had reported problems with my Atheros integrated NIC on my new mainboard. I finished the build on my RHEL 6 cloud node by adding an Intel Pro100 I had lying around as eth1. As expected, Anaconda recognized the card on install. To my surprise, the card was not active on first boot. But then the real surprise... I concurrently installed a RHEL 6 VM, and its virtual 8139 was also not energized on boot.

Something wicked is afoot, and it smells putridously similar to:
NetworkManager
When we cat ifcfg-eth0, by default, the file includes DEVICE, HWADDR, ONBOOT equal "no" (the symptom), and NM_CONTROLLED equal "yes" (the disease). I've hated NetworkManager ever since it showed up on RHEL 5. (It uses that ridiculous mixed cap, no space, spelling because the Red Hat developer who "innovated" this solution wishes he worked at Apple instead, and that's the way Apple processes are named.) My traditional solution was to chkconfig NetworkManager off, hardcode the network params, and life was good.

There is a perplexing part to this story, however, that forces more review. On RHEL 6, if you select @Base, @Minimal, or @Server-Platform, NetworkManager is not installed. That's fine with me, but the cards are tagged as being controlled by a service that doesn't exist. So, I approached this from the point of view that just as we are going to have to learn to use postfix instead of sendmail, we should learn how to configure static networking with NetworkManager. Preferably without the GUI.

So, I installed a slew of RPMs from DVD to get the NetworkManager service running. I read man pages, researched on the web, played with the tools, and found the solution. The solution is to hard code the static IP in a script under /etc/NetworkManager/dispatcher.d. But wait a second... If we have to hard code the data in a script, why not put it in /etc/sysconfig/network-scripts as we have been for over ten years. Good idea: let's uninstall this piece of crap.

The workarounds:
* Kickstart the server with a static address
* Set the address using firstboot
* Manually edit the interface files
* Use system-config-network-tui to set the address. (Don't try to use the TUI to set it to DHCP, it won't work)
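For the third workaround, a minimal hand-edited interface file looks something like this (the address, netmask, and gateway are example values for a hypothetical network):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (example values)
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes          # the symptom: default is "no"
NM_CONTROLLED=no    # the disease: take it away from NetworkManager
```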

Sunday, April 17, 2011

MSI 880GM MainBoard... Oops: E41

I bought a box of computer guts from Newegg, to upgrade my Linux virtualization host. This box has not been doing anything recently, because it was still running Fedora 8. I know F8 is soooo old, but the machine is only a PIII with 512Mb. Since it didn't have VMX or SVM, old school Xen was all it could run.

That board was an Asus, and it had served me well, as have several other Asus boards. My Citrix XenServer machine is running on a Foxconn mainboard and my VMware ESX is running on an HP board (because VMware will not lower themselves to run on anything but name brand hardware.) This time around I selected an MSI 880GM-E41.

My plan was to run RHEL 6 for a few weeks to look at the changes to their virtualization stack, then move to either SL60, Citrix XenServer, or VMware. To determine my options, I attempted an install of each. Of course VMware threw up all over the box, but I expected that.

What happened next scared me: Citrix XenServer refused to load, since the machine had no network port. No need to panic. I tried Red Hat 6, which installed, but didn't recognize the onboard port.

Uh oh... Bad board? That means pulling out all the guts and spending more money to send the thing in for a replacement. Maybe I should check the Google Interwebs to see if this is a known issue.

And it is. The 880GM-E41 has an Atheros AR8131M, which is not "fully" supported by Linux. If I had gone $5 more to the 880GM-E43, I would have gotten a Realtek 8111DL, but I didn't check the drivers first. The really fun part is that if I'd gone $5 less to the 760GM-E51, I would have only lost the ability to overclock the memory. For some reason Newegg's site only displays the 760GM-E51 when you specifically search for MSI boards. (It must be a special order.)
So what do we do? Turns out, Red Hat's Anaconda installer does not recognize the Atheros NIC, but the Red Hat kernel does. Once the install was finished, I was able to manually configure the interface. At first boot, udev had seen the card and loaded the driver, but was not able to activate it without an ifcfg-eth0 file.
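For reference, a minimal static ifcfg-eth0 looks something like this. (The address, netmask, and gateway below are made-up example values; substitute your own.)

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0
# Minimal static config -- example addresses, adjust for your network
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
```

After dropping the file in place, an `ifup eth0` (or a network service restart) should bring the interface up.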

I've got a couple hours left, tonight. I'll either try to hack Citrix Xenserver to recognize the NIC, or mess around on I Can Has Cheeseburger.

Monday, April 11, 2011

Scientific Linux (SL-6.0) Pt 2

For the record, SL-60 will not boot with 128Mb of memory, it reports "Out of memory". Works with 196Mb. I generally run my VMs on the ragged edge, so I may try to chew that a little lower.

*** Update ***
Nope, 196Mb is as low as I'm willing to try to go.

Sunday, April 10, 2011

Scientific Linux

With the introduction of Red Hat Enterprise Linux 6, I've had to renew my certification. A couple Fridays ago, I got my RHCSA, and in a couple more weeks I do the RHCE. In preparing for the test, I've come across a distribution called Scientific Linux.

Scientific Linux is a recompiled version of RHEL, similar in concept to CentOS, with one very important difference: CentOS sucks and is for posers. Scientific Linux, or SL, however, is the output of Fermilab and CERN. Whereas CentOS is a disorganized, bickering bunch of dot-com wannabes who have created a product for people that are too cheap to run Red Hat but too scared to run Fedora, SL is put together by people who need raw computing horsepower, and lots of it.

I've played with it, and thus far, am impressed with what they've done. Here are some observations:
* I was a little surprised when I used the boot.iso to launch an install, only to find that it automatically does a minimal install without asking.
* It defaults to postfix instead of sendmail, but I guess it's about time to let sendmail die.
* At install time, you can select several well respected third party repos.
* They've got a lean, mean, minimal install.
* The GUI installer would not run in 640Mb of RAM, but would at 768Mb. (But that might be a RH thing.) Once it was installed, I dropped the memory down to 256Mb and everything ran spiffily.
* They've got the install DVDs laid out wrong-- you can't do a base install using only DVD1. For me, that meant I had to actually extract both DVDs to create an install repo, rather than just mount DVD1.
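For anyone in the same spot: mount each DVD image with `mount -o loop` and copy its contents into a single directory, then point yum at the merged tree with a repo file like this one in /etc/yum.repos.d/. (This is a sketch; the repo name and baseurl path are example values.)

```ini
[sl60-local]
name=Scientific Linux 6.0 (local install tree)
baseurl=file:///srv/repos/sl60
enabled=1
gpgcheck=0
```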

I'm running this in a VM under Citrix XenServer and am having some problems with the vmtools utility, but that does not surprise me. Once I've got the base VM tweaked, my first project will be an OpenLDAP server, followed by a Kerberos server.

Saturday, April 09, 2011

Wente Sauvignon Blanc

And what do we all know about sauvignon blanc? That's right, it grows in New Zealand. This one is from California.

I actually looked for this wine, since one of the wine magazines pointed out that I shouldn't be snobby about CA wine, as long as the CA wine has a 90 point score. My wine shops don't usually carry 90 pointers, but I keep looking anyway. I could try going to swankier stores, but we both know I'm not going to pay the swanky price.

When I found this San Francisco Bay wine for about $17, I grabbed it as if I'd found a real bargain. One taste, and I knew I could have bought a Chilean white and saved myself $10. Way too much vinegar and citric acid. Why, you could almost taste the exhaust from the 101.

Scary to say this, but I'd rate this 90 point wine a 4 out of 10.

Casa Santorsola Barbera

It is very unusual to find a Piedmont Barbera for under $25. To my astonishment, this one was $7. Since these northern Italian reds are usually out of my price range, I don't have much to compare it to; so, I'll compare it to other $7 reds.

Bueno! It was not too heavy, a little dry. The Barbera is a close cousin to the Sangiovese of Chianti fame, but did not have the same taste. Oddly, I'd compare it more to a Pinot than a Chianti.

Definitely a strong 6 out of 10, maybe even a 7, because of the price.

Wednesday, March 30, 2011

Tomcat Native Libraries

Again, I have been wrangled into solving the secrets of the universe. To make matters worse, I'd already solved this problem once. Now I get to solve it again.

The problem: When Tomcat starts, it writes to the logs...
The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path
What! I'm not getting optimal performance! Unsatisfactory. Hey, Doug! We want optimal performance... go get us some of that there Tomcat Native. And can you get us this optimal performance by lunch tomorrow?

Yeah.

First things first: check out Apache's Native library for Tomcat documentation. Next, forget about that, it's worthless. Okay, now Google it, and read the dozens of posts about compiling Tomcat Native on Debian and Ubuntu. Next, forget about those, they're worthless too. Remember, this is production... and that means you grow up and run Red Hat.

Assuming you have Tomcat up and running, change to the Tomcat root directory, and look in bin. You should see a tomcat-native.tar.gz file. This contains one third of the source code you will need. Sounds like we need to find the rest:
rpm -qa apr-devel openssl-devel
apr-devel-1.3.9-3.fc13.x86_64
openssl-devel-1.0.0d-1.fc13.x86_64
Obviously, your versions may differ. Install them if missing. Extract tomcat-native and descend into the build dir:
tar -xzf tomcat-native.tar.gz
cd tomcat-native-*-src/jni/native
Per Unix lore, we ought to be able to run configure, make, and make install. In this case it fails with the error message "Please use the --with-apr option." After significant aggravation, I found the correct syntax for this option, as well as the SSL syntax:
./configure --with-apr=/usr/bin/apr-1-config \
 --with-ssl=/usr/lib64/openssl
Now I get:
error: can't locate a valid JDK location
I got this one:
ls -l $(which java)
<snip> /usr/bin/java -> /etc/alternatives/java
ls -l /etc/alternatives/java
<snip> /etc/alternatives/java -> /usr/lib/jvm/jre-1.6.0-openjdk.x86_64/bin/java
ls /usr/lib/jvm/
java
java-1.5.0-gcj-1.5.0.0
java-1.6.0
java-1.6.0-openjdk-1.6.0.0.x86_64
java-1.6.0-openjdk.x86_64
java-openjdk
<snip>
Since it's missing the JDK, it's got to be one of the openjdk items. Turns out two are links, so we can use the shortest. Dig around to figure out the syntax to pass into the compile:
./configure --with-apr=/usr/bin/apr-1-config \
 --with-ssl=/usr/lib64/openssl \
 --with-java-home=/usr/lib/jvm/java-openjdk
Victory! Run make. Now make install. Wait! Not so fast. When you do this, it's going to install something, somewhere... Right? Consider this: as long as this is a single purpose VM, go ahead and pull the trigger, but if you are running more than one Tomcat instance, which is probably the case on a physical server, running make install could cause problems. Instead, let's localize this library to the Tomcat that gave us the original TAR file. Back up a few steps and add a prefix option:
make clean
./configure --with-apr=/usr/bin/apr-1-config \
 --with-ssl=/usr/lib64/openssl \
 --with-java-home=/usr/lib/jvm/java-openjdk \
 --prefix=/opt/apache-tomcat-7.0.11
make; make install
Change dirs over to Tomcat's root. You'll notice an include dir with nothing in it-- it's safe to delete. Look in your lib dir and you will see several libtcnative files. These are the fruits of your labors. (Well, my labors, actually.) To invoke these files:
export LD_LIBRARY_PATH=/opt/apache-tomcat-7.0.11/lib
bin/startup.sh
The logs now show:
Loaded APR based Apache Tomcat Native library 1.1.20.
Hurray!
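One refinement: exporting LD_LIBRARY_PATH by hand won't survive the next restart. Tomcat's startup scripts source bin/setenv.sh if it exists, so a cleaner option is to put the export there. (The path below matches this example's --prefix; adjust to your install.)

```shell
# /opt/apache-tomcat-7.0.11/bin/setenv.sh
# Sourced automatically by catalina.sh at startup, so the native
# library is found on every start without a manual export.
LD_LIBRARY_PATH=/opt/apache-tomcat-7.0.11/lib
export LD_LIBRARY_PATH
```

With that in place, a plain bin/startup.sh loads the native library every time.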

And the optimal performance? Don't worry... its there: Trust me. Benchmark? Performance metrics? Don't worry about that stuff. Just believe. Feel the love. Be one with the universe.

Yeah.

Sunday, February 27, 2011

Mysterious Linux Permission Dots

Sometime mid last year, something strange happened: a dot appeared in the permission strings on the Fedora distributions. I asked around, and no one knew where it came from, or what it was. Now I know, but first, let me show you what I'm talking about:
$ touch test.txt
$ ls -l test.txt
-rw-rw-r--. 1 doug doug 0 Feb 27 20:21 test.txt
It's hard to see, but it's the eleventh character in the permission string:
1       Type of object, ex: d for dir or l for link
2-4     Permissions for owner
5-7     Permissions for group
8-10    Permissions for others
11      Mystery dot
Originally, I thought this was an ext4 thing, but when I mount an ext3 under Fedora 13, I still get the dot.

Turns out, the newer ls flags files that carry "alternate access" info: a dot means the file has an SELinux security context but an empty ACL, while a plus means it also has a non-empty ACL. Since the ext filesystems now mount with ACL support by default (and Fedora labels everything for SELinux), the extra character shows up on every file. Try this:
$ sudo tune2fs -l /dev/sda6 | grep options
Default mount options: user_xattr acl
$ setfacl -m u:apache:r test.txt
$ ls -l test.txt
-rw-rw-r--+ 1 doug doug 0 Feb 27 20:21 test.txt
By executing a setfacl command (as in set file ACL... ACL is pronounced like "ack ull") we change the dot to a plus, which tells us the ACL is no longer empty.
$ getfacl test.txt
# file: test.txt
# owner: doug
# group: doug
user::rw-
user:apache:r--
group::rw-
mask::rw-
other::r--
If we blank the ACL, the plus is gone, and the dot is back:
$ setfacl -b test.txt
$ ls -l test.txt
-rw-rw-r--. 1 doug doug 0 Feb 27 20:21 test.txt

Thursday, February 24, 2011

Authenticated ESMTP over SSL with Sendmail

I've been wanting to get this running, but kept running into the same problem. Imagine my chagrin to find the problem was something simple. At least it's simple once you understand how it's supposed to work.

First, the basics: open a port on the firewall and configure Sendmail. Check out Denie's Blog for an explanation of the steps. Assuming you've got a running Sendmail / Dovecot server, here's the quick and dirty:
yum install -y cyrus-sasl
chkconfig saslauthd on
service saslauthd start
echo "mydomain.xyz RELAY" >> /etc/mail/access
All authentication must always be encrypted. Set up SSL by piggybacking on Dovecot:
mkdir -p /etc/pki/sendmail/{certs,private}
cd /etc/pki/sendmail
ln -s ../../dovecot/certs/dovecot.pem certs/sendmail.pem
ln -s ../../dovecot/private/dovecot.pem private/sendmail.pem
And make these changes to /etc/mail/sendmail.mc:
dnl Authenticated send from mobile dnl
DAEMON_OPTIONS(`Port=1234, Name=MTA, M=Ea')dnl
dnl No anonymous logins (y) dnl
define(`confAUTH_OPTIONS', `A y')dnl
TRUST_AUTH_MECH(`LOGIN')dnl
define(`confAUTH_MECHANISMS', `LOGIN')dnl
define(`confCACERT_PATH',`/etc/pki/tls/certs/')dnl
define(`confCACERT',`/etc/pki/tls/certs/ca-bundle.crt')dnl
define(`confSERVER_CERT',`/etc/pki/sendmail/certs/sendmail.pem')dnl
define(`confSERVER_KEY',`/etc/pki/sendmail/private/sendmail.pem')dnl
Last step: restart Sendmail and test. It was the test that I kept messing up; it always failed. To test, use telnet:
$ telnet 1.2.3.4 1234
Trying 1.2.3.4...
Connected
Escape character is '^]'.
220 ESMTP Sendmail 8.14.4/8.14.4; Thu, 24 Feb 2011 12:57:44 -0500
ehlo localhost
250-Hello pleased to meet you
250-ENHANCEDSTATUSCODES
250-PIPELINING
250-8BITMIME
250-SIZE
250-DSN
250-AUTH LOGIN
250-STARTTLS
250-DELIVERBY
250 HELP
First, we're looking for AUTH LOGIN. We need to send our username and password... but we have to send them using BASE64 encoding. This is security through obscurity, in the respect that anyone can crack BASE64. To change our clear text to the expected format, we can use something like Ostermiller's Javascript utility. The next trick is that your username is the sending email address, which has to be in the format of:
      unixname @ mydomain.xyz
...where the domain is the one we added for RELAY to /etc/mail/access. In the open telnet session:
AUTH LOGIN
334 VXNlcm5hbWU6
ZG9zz0BzzW5zzXIzzXM=
334 UGFzc3dvcmQ6
bWzzbSzzNQ==
235 2.0.0 OK Authenticated
We're in. Make sure to add the login name in the client in the same A@B.C format.
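If you'd rather not paste credentials into a web page, any box with coreutils can do the BASE64 conversion locally. (The address and password below are made-up examples.)

```shell
# Encode the username (the full sending address) and the password.
# printf avoids the trailing newline that echo would sneak into the encoding.
printf '%s' 'unixname@mydomain.xyz' | base64
printf '%s' 'secretpass' | base64

# Sanity check: decoding should give back exactly what went in
printf '%s' 'unixname@mydomain.xyz' | base64 | base64 -d
```

Paste the two encoded strings at the 334 prompts in the telnet session, just like the Javascript route.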

We need to make a last change to force the rejection of clear text passwords. Back to the /etc/mail/sendmail.mc
dnl Only authenticate across SSL (p) dnl
define(`confAUTH_OPTIONS', `A p y')dnl
Add the "p" option, and restart Sendmail.

Fratelli Pinot Grigio

I worried over this one, because it was at the low end of my budget: which is dangerously low. It's not hard to find a bad wine for under $7, but for the price, this was a nice wine. A respectable wine in league with Bella Sera.

If a beginner needed an inexpensive wine for a function, this would be a good choice. I'm inclined to recommend Bella Sera, but the label on this bottle looks classier... and sometimes that counts. An easy 5 out of 10.

Botter Verduzzo Prosecco

You know my wine motto: I don't drink blends. Well, this one was a Prosecco, so it doesn't count. And I can justify that...

Champagnes and sparkling wines very seldom mention their varietals or vintages, so they are almost always a blend of something. For most sparklings, I complain about the overabundance of bubbles. But this one was truly different! What amazed me was how it "tasted" bubbly without being bubbly.

I really want to give this one an 8 of 10, but I think that the second bottle will be a 7.

Monday, February 21, 2011

MySQL Replication - Pt 3

And here's the stunnel config. The trick is that it has to work both ways, which is to say master has to be able to query slave and slave has to be able to query master. Thus, the same /etc/stunnel/stunnel.conf goes on both machines, except for one value:
# logging
debug=4
output=/opt/stunnel/server.log
# setup
pid=/opt/stunnel/server.pid
foreground=no
setuid=nobody
setgid=nobody

[repliserver]
accept=3308
connect=127.0.0.1:3306
client=no
# ssl
cert=/etc/stunnel/server.pem
CAfile=/etc/stunnel/server.ca
verify=2

[repliclient]
accept=127.0.0.1:3307
connect=other_server:3308
client=yes
# ssl
cert=/etc/stunnel/server.pem
Remember, other_server is the only difference on the two machines. Each points to the other. Even though the SSL certs are named the same, they are unique to each machine. The port listed in CHANGE MASTER TO will be the client accept port (3307 in this example.)

To test, from both machines issue:
mysql -h 127.0.0.1 --port 3307 -e "SHOW DATABASES;"
Add user names and passwords as needed.

One last note: The ports (3307 in this example) can be anything, but absolutely must be the same port number on master and all slaves.

Sunday, February 20, 2011

MySQL Replication - Pt 2

Through the wonders of technology, I've already been advised that my proof of concept is full of crap because:
* Replication does not provide automated fail over.
* After a fail over, there is no way to sync back to master.

Bullet one is true, but it's just an excuse. Without replication, a catastrophic failure means getting a backup server online with data as old as my last backup or rsync. (And remember: rsync'ing a live MySQL is notoriously unreliable.) With replication, I'm within a few transactions. And don't bother me with your silliness about moving the RAID disks to a spare machine you had laying around.

Bullet two is only moderately true. The simple answer is to switch the slave to read only, dump the database, transfer it to the repaired master, and restore. Now we're synced, so we stop the slave and restore the original relationship. Of course, there is no way of knowing how long we'll be in read only.

As it turns out, we can make a few changes to deal with bullet two. As described in 5.0 16.3.6, we swap the relationship and allow the repaired system to sync as a slave. Once the repaired system has ingested the missed writes, we restore the relationship. The planned outage should be minutes.

On slave, add to /etc/my.cnf:
log-bin=mysql-bin
On master, add to /etc/my.cnf:
report-host=sync_back
Effectively, we have added the ability for the slave to be a master and the master to be a slave. If we add these at build time, the slave does not need to be restarted at time of failure to become a master. Remember to add a replication user to the slave.

At time of master failure, we want to write to slave. From slave's MySQL console:
mysql> STOP SLAVE;
mysql> RESET MASTER;
Switch front ends to write to slave. Once master is repaired, issue the CHANGE MASTER TO command described in the previous post, modified as needed. Master will slave the missed writes. Once caught up, we fail back to master, and on slave issue START SLAVE.
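For clarity, here's roughly what that looks like on the repaired master, which temporarily acts as a slave of the promoted slave. (This is a sketch: the port assumes the stunnel setup from these posts, and the log file and position are example values; pull the real coordinates from SHOW MASTER STATUS on the machine currently taking writes.)

```sql
-- On the repaired (old) master: replicate back from the promoted slave
CHANGE MASTER TO
  MASTER_HOST='127.0.0.1',          -- through the local stunnel, as before
  MASTER_PORT=3307,                 -- the stunnel client accept port
  MASTER_USER='replicant',
  MASTER_PASSWORD='password',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=106;
START SLAVE;
```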

So nah!

MySQL Replication

I've had a dozen people tell me that setting up a redundant MySQL server was just too difficult to be worth the effort. After all, what was the likelihood that the database server would ever go down? I mean... it's not like tires on cars ever go flat, so why would a computer ever break? Besides, the server is on a UPS.

Well... bad news, DBAs: I got it working in under an hour (not counting deploying the VMs and config'ing the base environment.) It's remarkably straightforward: the high altitude view is defined in RTFM 5.0 16.1.1.7, but of course, there are a few tricks.

Assuming a new cluster build, one server will be the master, a second the slave. Start mysqld and verify functionality on both servers. On master, add to /etc/my.cnf:
log-bin=mysql-bin
server-id=2024561111 # some random value
On slave, add to /etc/my.cnf:
server-id=3035713343 # some random value (must fit in 32 bits)
report-host=slaveXYZ # bonus: name of slave
Restart both servers. Verify.

Log into master's MySQL console and add the replication user:
mysql> CREATE USER 'replicant' IDENTIFIED BY 'password';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'replicant';
Here's another trick: The replication is going to execute across the network in the clear, so ultimately, we want to push this through an SSL tunnel. With stunnel, we can specify the user as 'replicant'@'127.0.0.1' (not localhost).
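In that case, the user creation on the master would look something like this (a sketch; the password is obviously an example):

```sql
-- Restrict the replication user to connections arriving via the local
-- stunnel endpoint (TCP to 127.0.0.1, not the localhost socket)
CREATE USER 'replicant'@'127.0.0.1' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'replicant'@'127.0.0.1';
```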

Determine the master's log coordinates: (yes, that's what it's called)
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
UNLOCK TABLES;
We need to document the File and Position. Technically, the LOCK and UNLOCK are not needed since this is a new cluster. The right way to do this is to delay the UNLOCK until you verify the slave is connected.

Log into slave's MySQL console and define the master:
mysql> CHANGE MASTER TO
-> MASTER_HOST='127.0.0.1',
-> MASTER_PORT=1234,
-> MASTER_USER='replicant',
-> MASTER_PASSWORD='password',
-> MASTER_LOG_FILE='mysql-bin.000001',
-> MASTER_LOG_POS=304;
Another trick here: since we are going to use an SSL tunnel, we need a custom port value. Again, use the IP address, not localhost, to force it through the TCP stack. Notice we used the file and position documented above.

The big gap in the documentation is its failure to tell you to actually start replication. On the slave:
mysql> SHOW SLAVE STATUS\G
<snip>
  Slave_IO_Running: No
  Slave_SQL_Running: No
<snip>
mysql> START SLAVE;
mysql> SHOW SLAVE STATUS\G
Did it work? Check the master:
mysql> SHOW SLAVE HOSTS;
Under Host you should see the "bonus" name specified in the slave's /etc/my.cnf.

And that's it. A big misconception seems to be that the slave periodically polls the master. Half true: the slave initiates the connection, but once connected, the master pushes binlog events down it as writes happen. This could pose a problem if the slave is on DHCP. If the slave is offline, it resyncs on start, which creates load on the master. Depending on the master's load, our systems could experience "lag".

Oh, and yes, I'm referencing 5.0, because that's what's included with RHEL5. My old copy of RHEL 6 Beta had 5.1 (I really should get a newer copy.) I glanced over the MySQL 5.5 docs, no big differences.

Blake Griffin Leaps Over a Kia for a Slam Dunk- SI.com

In LA, Sprite held a slam dunk contest at the Staples Center. I was impressed by two shots. The Wizards' JaVale McGee did an impressive two basket, two ball, flying dunk. But the winner was Clippers rookie Blake Griffin, leaping over the hood of a Kia, intercepting a basketball mid-air, and slamming it in while accompanied by a fifty person church choir.

Saturday, February 19, 2011

Stupidest Google Ad of the Week

I had just sent an email about an upcoming movie called "Apollo 18" and got this advert on my Gmail. "NASA Coupons" "Save now on NASA" What?

I had to click it. I got a coupon for Green Giant frozen vegetables. Or more correctly, I was given the chance to provide my email to someone for some reason, which may or may not have anything to do with NASA. Go ahead and click the picture above to see what you get... The more we click, the more it costs them.

Thursday, February 10, 2011

Interactive Organized Crime Map

Really cool interactive organized crime map on Wired. It seems smuggling Asian wood products is more profitable than human trafficking. Good to know.

Saturday, February 05, 2011

Go, Mark Kelly, Go

Astronaut Mark Kelly has the opportunity to command the last shuttle mission, yet there are people that don't approve. Not because he isn't qualified, but because they think that he should be by his wife Rep. Gabrielle Giffords' side while she recovers from an assassination attempt by some drugged out whack job. Guess what, people: your opinions on what he should do are worthless.

Is he qualified? Does he have his head on straight? Is he focused on the mission? Will his absence result in his wife's death? Those are the important questions... not should he do it.

Friday, February 04, 2011

Restrict Concurrent Remote Logins

I stumbled upon an interesting puzzle. I was asked to configure a system that would allow a user to SSH from only one remote address at a time, but allow multiple logins from that location. Furthermore, they can log in from wherever they want, but never from two locations at once. Oh, and the restriction can't block the TTYs or Xterms.

I'd never heard of a scenario like that. There are lots of bells and whistles to lock down a system, but this one caught me off guard. After a few minutes searching the Interweb, I decided to whip out a hack:
#!/bin/sh
#
# Restrict concurrent logins from multiple locations
#

MYT=`tty | sed "s~/dev/~~"`
MYL=`who | grep "$MYT" | awk '{print $NF}' | sed 's/[()]//g'`
MYC=`who | grep "$USER.*\..*\." | grep -vc "$MYL"`
if [ "$MYC" -gt 0 ]; then
  echo "Logged in on $MYT from $MYL"
  echo "Other remote locations:"
  who | grep "$USER.*\..*\." | grep -v $MYL | \
    awk '{print $NF}' | sed 's/[()]//g' | sort -u | xargs echo " "
  echo "Too many remote logins. Good bye."
  logger -p authpriv.warn "Killed remote login: $MYT $MYL"
  ps | grep -m 1 "$MYT" | awk '{print $1}' | xargs kill -9
  #fuser -k `tty`
fi
Here's the flow:
  1. Determine our current TTY
  2. Get our remote address (client address)
  3. Are we logged in from another address...
      where the other address has two dots...
      and is not our own address
  4. If so, print all sorts of helpful information
      Note: comment "echo" lines in the wild
  5. Log the event
  6. Kick the bastard off
    fuser *should* have worked :(
Save it as /etc/profile.d/location.sh and it will automatically be called after SSH authentication.

*** Update ***
A big mistake to avoid: don't use the exit command in any script in the /etc/profile.d directory. Since those scripts are sourced, exit terminates the login process itself, not just the script.

Sunday, January 30, 2011

Downgraded My Netflix

...to one disk at a time, because I've seen it all. Who would have thought it possible. Maybe I should get a life.

Or better yet, maybe I should post more technically useful information and less whiny personal drivel. After all, this isn't MySpace, or MySpace-TNG.

(Yes, that was another Facebook jab.)

Wednesday, January 26, 2011

So... Get With The "Novating"

One of the things that I truly love about CNN is the fact that it's impossible to link to a story, because they keep changing the news. It's not their fault: history is constantly changing, and it is their duty to make sure that history matches the present's view of how things should have been. But that's not my point... My point is this morning's headline about the State of the Union Address:
Obama: "We must out-innovate the world"
They do have a transcript of the speech, which quotes the President as saying:
And now it's our turn. We know what it takes to compete for the jobs and industries of our time. We need to out-innovate, out-educate, and out-build the rest of the world.
Here's my question: If you are not negative, aren't you positive? That's called a double negative. If a double negative is a positive, then don't a negative and a positive cancel each other out? If you are not positive, then you are neutral. You are not (negative) and positive, therefore... nothing. Zero.

Still with me?

If you work in a building, and you walk out of the building, you are not in the building. You are either out or in. You can be out of the office, but still be in the building, because you are in the cafeteria, which is also in the building. But you can't be out of the office and in the office at the same time.

Simple, right?

So if we out-innovate, and you can't be out and in at the same time, all that's left is to novate. Turns out novate is a word. As a matter of fact, it's a legal term, which a Harvard educated attorney like President Obama would know. It means:
novate - replace with something new, especially an old obligation by a new one
In other words, the President says we don't have to honor our old obligations.

Yeah! I choose to novate my mortgage, first. Be this public notice: By order of the President, Doug no longer has to pay his mortgage.

And what are you novating?

Monday, January 03, 2011

Google Maps Hack for Bash Scripts

Here's a fun little hack. This lets you query Google maps from a bash shell script:
J="chicago"; \
K=`echo $J | sed "s/ /_/g"`; \
elinks --source "http://maps.google.com/maps?q=$K" | \
sed "s/}/\n/g" | grep "hnear\|latlng:" | \
sed -e "s/,+/+/g" -e "s/[,={\"]/\n/g" | \
grep "\+\|l..:[0-9\-]"

Chicago+Cook+Illinois
lat:41.878113999999997
lng:-87.629797999999994
Feed it "dakar" by setting $J, and you get:
Dakar+Region+Dakar+Senegal
lat:14.75
lng:-17.333333
What good is it? None, I'm sure.