Monday, November 10, 2014
Yes... From all indications, VPSLink is out of business:
* Their phone lines do not work.
* E-mails are unanswered.
* New support tickets cannot be opened.
* New forum threads cannot be created.
They will still let you open a new account, and they will continue to bill your credit card. If you have a functioning VM, it will continue to operate, but if you have any trouble with your service or account, there is no avenue for support.
As a result, my only available course of action was to cancel my account.
Saturday, July 05, 2014
IPtables Blocking KVM Bridge
Recently, I've been having problems with VM networking on RHEL KVM hosts. The initial symptom is that the VM cannot get a DHCP address from the physical network through a bridged virtual NIC. I've determined the problem is with the FORWARD chain of IPtables.
Assuming a bridged ethernet called br0, I've added the following rules:
iptables -I FORWARD -i br0 -o br0 \
    -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -I FORWARD -i br0 -o br0 \
    -m state --state NEW -j ACCEPT
This will allow UDP, TCP, and ICMP initiated from the physical network to be routed to VMs attached to the br0 bridge.
The downside of this configuration is that it routes all traffic, which some might consider bad form. If the KVM host only ran a fixed set of VMs, it might be wise to lock down specific ports. In a dynamic environment (like my lab), the level of effort to support IPtables on the host exceeds the risk, as the VMs are all running IPtables themselves.
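For comparison, locking the forward chain down to specific services might look something like this (just a sketch, not part of my setup; the port list is an assumption, so match it to whatever the VMs actually serve):
# allow return traffic for sessions the VMs already have open
iptables -I FORWARD -i br0 -o br0 \
    -m state --state RELATED,ESTABLISHED -j ACCEPT
# only accept new TCP connections to SSH, HTTP, and HTTPS on the VMs
iptables -I FORWARD -i br0 -o br0 -p tcp -m multiport \
    --dports 22,80,443 -m state --state NEW -j ACCEPT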
Monday, June 30, 2014
Aquaponic System
I've constructed a small aquaponics system in my backyard greenhouse. Here's a video of the first iteration. I have been impressed at how little water loss I'm experiencing in the system. Admittedly, the system is over-engineered, but that is normal for a prototype.
It's better to put too much into a test system and introduce incremental cost savings over time, than to cut corners that might lead to total failure.
Thursday, April 17, 2014
BeagleBone Black Console Cable
I recently got a BeagleBone Black from Adafruit. Immediate access was easy using the USB interface:
sudo screen /dev/ttyACM0 115200
To disconnect, press [Ctrl][A], followed by [k], and respond [y].
I wanted to watch the boot sequence, but the USB interface does not come online until the end of the init process. I had purchased a USB-to-serial adapter, but did not find an obvious path between the "Prolific Technology PL2303" converter and the serial header on the BeagleBone. The lack of documentation should not have been a surprise: most people have no interest in this feature.
To make matters more interesting, the picture on the Adafruit website was off by one pin. Luckily, Google Images was able to find the blog of Christophe Blaess, who had posted a picture of the correct pin-out:
The blog is written in French, which I don't speak, but a picture is worth a thousand words.
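Once the adapter is wired to the debug header per that pin-out, the console can be opened the same way as the USB interface (a sketch, assuming the PL2303 enumerates as /dev/ttyUSB0 on your machine):
# the PL2303 usually shows up as a USB serial device
sudo screen /dev/ttyUSB0 115200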
Thursday, March 13, 2014
KVM Network Bridge
Finally, a simple way to configure a bridged network for a KVM server:
virsh iface-bridge eth2 br0
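To sanity-check the result, something like the following should show the new bridge and the enslaved NIC (assuming the usual libvirt and bridge-utils tools are installed):
# list libvirt-managed interfaces, including inactive ones
virsh iface-list --all
# show the physical port attached to the bridge
brctl show br0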
Monday, May 27, 2013
PVM, aka Beowulf Cluster
I stumbled upon a little project this weekend and found myself without a Beowulf cluster to help out. It had been several years since I'd built a computational cluster, so I noticed a few "new" gotchas. But... before we get to the fun stuff, let's review:
- No, Beowulf is not "dead technology"
- No, Hadoop is not the perfect tool for every job
To set up the absolute simplest PVM, we need two nodes, with an NFS share, and a user account. The user needs an SSH key pair distributed to all nodes such that the user can login to any machine, from any machine. Each node's hostname must be able to resolve via DNS or /etc/hosts. Each node's hostname and address must be statically configured, and cannot be "localhost".
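For reference, those prerequisites boil down to something like this (a sketch only; the addresses are invented, and pvm1/pvm2 match the node names used below):
# as root, give every node static name resolution (never localhost)
echo "192.168.100.11 pvm1" >> /etc/hosts
echo "192.168.100.12 pvm2" >> /etc/hosts
# as the non-root user, create a key and push it to every node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id pvm1
ssh-copy-id pvm2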
The first step is to install the base package from an EPEL repo. (I'm using Scientific Linux 6.) The package is delivered as source and must be compiled with a minimal set of options:
yum install -y pvm --nogpgcheck
rpm -ql pvm | grep -m1 pvm3
/usr/share/pvm3
This shows us where the RPM installed the source. The issue with this incarnation is that it is still configured for RSH rather than SSH:
export PVM_ROOT=/usr/share/pvm3/
cd $PVM_ROOT
find . -type f -exec sed -i "s~bin/rsh~bin/ssh~g" {} \;
make; make install
Unfortunately, there are still hard-coded references to RSH in some of the binary libraries, so we spoof the references with a symlink:
ln -s /usr/bin/ssh /usr/bin/rsh
Repeat these steps on all (both) nodes.
On only one of the nodes (it doesn't matter which one) validate that PVM is not running, configure the PVM_ROOT variable, and start the first instance as the non-root user:
ps -ef | awk '!/awk/ && /pvm/'
echo "export PVM_ROOT=/usr/share/pvm3" >> ~/.bashrc
echo id | pvm
pvm> id
t40001
Console: exit handler called
pvmd still running.
ps -ef | awk '!/awk/ && /pvm/'
compute <snip> /usr/share/pvm3/lib/LINUXX86_64/pvmd3
Notice that the PVM daemon launched and remained resident. Individual commands can be piped to PVM, or an interactive console can be used. From the same node, remotely configure the next node:
ssh pvm2 'echo "export PVM_ROOT=/usr/share/pvm3" \
>> ~/.bashrc'
# should not prompt for a password
ssh pvm2 'echo $PVM_ROOT'
/usr/share/pvm3
ssh pvm2 'rm -f /tmp/pvm*'
The last line is very, very important. From the first node, remotely start the second node:
pvm
pvmd already running.
pvm> conf
conf
1 host, 1 data format
HOST DTID ARCH SPEED DSIG
pvm1 40000 LINUXX86_64 1000 0x00408c41
pvm> add pvm2
add pvm2
1 successful
HOST DTID
pvm2 80000
pvm> conf
conf
2 hosts, 1 data format
HOST DTID ARCH SPEED DSIG
pvm1 40000 LINUXX86_64 1000 0x00408c41
pvm2 80000 LINUXX86_64 1000 0x00408c41
In this sequence, we have accessed the console on pvm1 to view the cluster's configuration (conf). Next, we started the second node. It is now displayed in the cluster's conf.
Just for fun, let's throw it the simplest of compute jobs:
pvm> spawn -4 -> /bin/hostname
4 successful
t8000b
t8000c
t4000c
t4000d
pvm>
[3:t4000d] pvm1
[3:t4000c] pvm1
[3:t4000c] EOF
[3:t4000d] EOF
[3:t8000b] pvm2
[3:t8000b] EOF
[3:t8000c] pvm2
[3:t8000c] EOF
[3] finished
There are a few things to notice about the output:
- The command asked the cluster to spawn the command "/bin/hostname" four times.
- The "->" option indicates we wanted the output returned to the console, which is completely abnormal... we only do this for testing.
- The prompt returned before the output. The assumption is that our compute jobs will take an extended period of time.
- The responses were not displayed in any particular order. They were displayed as they returned, because all this magic is happening asynchronously.
- Each job's responses, from each node, could be grep'ed from the output using a unique serial number automatically assigned to the job (see the example below).
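As a trivial illustration, assuming the console session above had been captured to a file (the file name is hypothetical):
# pull back only the responses from spawn job 3
grep '^\[3:' pvm_session.log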
echo halt | pvm
Finally, remember this one last thing: The cluster is a peer-to-peer grid. Any node can manage any other, any node can schedule jobs, and any node can issue a halt.
Monday, March 11, 2013
Fun with Unicode Characters
Whenever I am tasked with creating a web page, it ends up being the absolute bare minimum. (If you don't believe me, just visit dougbunger.com!) Of course I do it in the interest of fast rendering and bandwidth conservation... because I am a good Internet citizen. So here are some fun Unicode graphics that can be used as web page icons. There are thousands of characters, but these seem to be a good cross-platform subset.
And by the way: Excuse the font.
This --> & ...is an ampersand.
← | ← | ↑ | ↑ | → | → | ↓ | ↓ |
↔ | ↔ | ↕ | ↕ |
⇐ | ⇐ | ⇑ | ⇑ | ⇒ | ⇒ | ⇓ | ⇓ |
⇔ | ⇔ | ⌂ | ⌂ |
■ | ■ | □ | □ | ▪ | ▪ | ▫ | ▫ |
▲ | ▲ | ► | ► | ▼ | ▼ | ◄ | ◄ |
○ | ○ | ● | ● | ◖ | ◖ | ◘ | ◘ |
✇ | ✇ | ✈ | ✈ | ✓ | ✓ | ❥ | ❥ |
➲ | ➲ | ➳ | ➳ | ➸ | ➸ | ➼ | ➼ |
Wednesday, March 06, 2013
Removing Old Linux Kernels
Today, I had trouble removing an obsolete kernel from my workstation. It should have been simple enough, but I tried to use yum erase rather than rpm -e, and kept running into errors. That is obviously the bad news, so let's make sure to report the good news: YUM is such an improvement over RPM alone that it is smart enough to know which kernels are obsolete. For instance:
# rpm -qa kernel
kernel-2.6.32-279.el6.x86_64
kernel-2.6.32-279.19.1.el6.x86_64
kernel-2.6.32-279.22.1.el6.x86_64
# uname -r
2.6.32-279.22.1.el6.x86_64
# yum erase kernel
<snip>
Removing:
kernel x86_64 2.6.32-279.el6
kernel x86_64 2.6.32-279.19.1.el6
Is this ok [y/N]:
First, we determine the machine has three kernels. Second, we see that it is running the most recent version, dot-22. Finally, YUM demonstrates that it is smart enough to erase the two old kernels, but not the current kernel.
One small problem: I don't want to remove dot-19 because I have a driver problem with dot-22. I only want to remove dot-null. Here's the trick:
# yum list kernel
Loaded plugins: refresh-packagekit, security
Installed Packages
kernel.x86_64 2.6.32-279.el6
kernel.x86_64 2.6.32-279.19.1.el6
kernel.x86_64 2.6.32-279.22.1.el6
# yum erase kernel-2.6.32-279.el6
The critical success factors are to drop the arch and to add a dash (-) between the package name and the version number.
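As a side note, if the goal is simply to keep the newest couple of kernels, the yum-utils package ships a helper that can do the pruning in one shot (assuming yum-utils is installed; I did not use it here):
# keep the two newest kernels, erase the rest
package-cleanup --oldkernels --count=2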
Sunday, February 10, 2013
RHEL6 Udev Rules
I recently moved my home workstation from Fedora to Scientific Linux 6, on the grounds that Fedora has diverged too far from the current RedHat distribution. Sure, bleeding edge is cool, but as a self-professed Linux mercenary, I need to be in sync with what the real world is doing... not what it might be doing.
After the move, I've found myself annoyed by the way the Gnome desktop handles removable media, in particular media cards such as Flash and Secure Digital (SD). One trick I learned a while back was to make sure to assign an e2label to cards formatted with an ext filesystem. This way, when Gnome automounts the media and places an icon on the desktop, the name is the e2label. Without an e2label, the icon's text is the device size. This is also true of FAT devices.
The real problem, however, is the fact that the device is owned by root. Since the desktop is running as an unprivileged user (because we never log in to the GUI as root... right?) we are faced with an icon for a device that we can't drag-and-drop to. Doh! Here's how I used Udev to trick the system into allowing my GUI account to use these devices.
First, insert the device, allow it to automount, and let it appear on the desktop. (We won't worry about how the kernel, udev, fuse, and the desktop are accomplishing this.) Assuming an ext device, it was probably mounted to a dynamic mountpoint under /media; in this case, we ended up with:
# mount | grep media
/dev/sdb1 on /media/Flash_1GB type ext3
(rw,nosuid,nodev,uhelper=udisks)
# ls -ld /media/Flash_1GB
drwxr-xr-x. 4 root root 4096 Feb 9 15:55 /media/Flash_1GB/
The goal is to modify a few mount options and change the ownership of the device. To accomplish this, we need to tell Udev to watch for a given device and respond in a specific manner. This requires isolating a unique aspect of the device that can be used as a trigger. The command to manage Udev changed with RHEL6:
# udevadm info --query=all --attribute-walk --name=/dev/sdb
<snip>
looking at device '/devices/<snip>/6:0:0:0/block/sdb':
KERNEL=="sdb"
SUBSYSTEM=="block"
DRIVER==""
ATTR{range}=="16"
ATTR{ext_range}=="256"
ATTR{removable}=="1"
ATTR{ro}=="0"
ATTR{size}=="2001888"
<snip>
looking at parent device '/devices/<snip>/6:0:0:0':
KERNELS=="6:0:0:0"
SUBSYSTEMS=="scsi"
DRIVERS=="sd"
<snip>
ATTRS{vendor}=="Generic-"
ATTRS{model}=="Compact Flash "
<snip>
There are a few things to notice about this output. On the command line, the name is the disk, not the mounted partition. The top-most block is the device; the blocks that follow are upstream devices. We are most interested in the ATTR fields. Don't be seduced by the first directive, KERNEL=="sdb"... we all know that Linux is notorious for changing device letters on reboot.
Second, Udev rules are created as code snippets in the /etc/udev/rules.d dir. For simplicity's sake, create a file called 99-local.rules and add all machine-specific rules to this one file. Each rule is one line. There are many sophisticated and elegant things that can be done by Udev, but my example is a simple sledgehammer:
SUBSYSTEMS=="scsi",
ATTRS{model}=="Compact Flash ",
ATTR{size}=="2001888",
RUN+="/bin/sh -c 'mount /media/Flash_1GB
-o remount,noatime,nodiratime,async;
chown doug:doug /media/Flash_1GB' "
The first directive tells the machine that we're dealing with a disk (we could have used "block"). The second directive is an attribute that was listed for the device (notice the spaces: it has to exactly match the output from udevadm). The third attribute is the device size, so this rule applies just to this card, or at least to cards with exactly this number of sectors. The last part of the rule is the RUN command, which executes a set of bash commands. In this case, I'm changing the default mount options, then I'm changing the mount point ownership. Using the RUN feature provides infinite flexibility.
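Once the rule file is saved, it can be picked up without a reboot (a sketch; on RHEL6 the following should be enough, or simply eject and re-insert the card):
# re-read the rule files, then replay block device events
udevadm control --reload-rules
udevadm trigger --subsystem-match=block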
Tuesday, January 15, 2013
Eclipse Plugins For RedHat
You know how they say you shouldn't look at the sun during an eclipse or you'll go blind? If there was any truth to that, why aren't there villages full of blind people in third world nations? Why aren't there myths about the time that everyone on Earth went blind? Think about it... There had to be a first eclipse. Who told the first dude not to look at it or he'd go blind? Read on for the answer.
In the meantime, I've been beating myself up for a few days trying to get the Epic plugin for the Eclipse IDE installed. I've got it on my Fedora desktop at home and wanted it for a Linux machine at work, but the eclipse-epic RPM wasn't on Satellite. Simple enough: download it and install it from Epic-IDE.org... and spend the next several days wondering why it doesn't work.
The first issue is that the documentation has not been updated in several revisions, so the instructions for Epic are completely out of line with Eclipse 3.x. Every time I would try to follow a path through the point-and-click menus that seemed reasonable, I would get a message such as "could not find jar" or "no software site found at jar". The obvious problem would be permissions, but all files and paths were world-readable.
The next obvious choice was to start hacking under the hood. I looked at the Fedora eclipse-epic RPM and compared it to the Eclipse install tree. I thought I had found a promising option when I came across a plugins directory, but I could never get the machine to pick up the files.
Then, I tried something soooo stupid, it had to work. I entered some arbitrary text into an unrelated field. Of course, it immediately launched exactly as expected. The trick is to understand which undocumented fields are required but not checked by the application. So, on Eclipse 3.x on RedHat Linux, to add a plugin:
Extract (unzip) the plugin
Launch Eclipse
Help / Install New Software...
Click Add... and Local...
Browse to (not into) the extracted directory
Click OK
*** In the Name field, provide some text ***
Click OK
The plugin options should appear in the pop-up window. From this point, it should just be a case of checking boxes and accepting defaults. Right? Wrong! Now we get dozens of lines of Java-esque errors, which, for those of you who have ever worked with Java, is line after line of completely useless garbage. Take for instance the line:
No repository found containing: osgi.bundle,
A reasonable person might think that this and the fifty lines that follow it are telling you that there is a missing dependency. Obviously, what that means is that the only way to install the plugin is to be root.
Remember those permission problems from earlier? It's not that we didn't have permission to the files we just installed, it's that we don't have permission to the Eclipse installation tree. So...
Exit Eclipse
Open a terminal window and su to root
Launch Eclipse from the command line
Help / Install New Software...
Select "Available Software Sites"
Highlight the failed plugin
Click Remove and OK
Continue from "Click Add..." in the steps above.
As for why the first human didn't go blind during the first eclipse? He was too busy trying to figure out why his wheel wouldn't roll, because the instructions didn't mention that it had to be upright.