Tuesday, October 23, 2012

Linux KVM Disk Drivers

I was having a problem with storage device names on virtual machines running on a RedHat KVM host.  Occasionally, I'd build a VM and the storage device would be named /dev/sda; other times, /dev/xvda.  I quickly found that if I created the VM using virt-install, I got the sda device, and if I used virt-manager (the GUI app), I got an xvda device. After some investigation, I've discovered where things went wrong.

First, the syntax of the virt-install command changed in RHEL 6, and I was still using the RHEL 5 command.  Rather than complaining, RHEL 6 would guess what it thought I meant.  Here's the wrong command:
virt-install -n server -r 512 -w bridge=virbr0 \
-f /var/lib/libvirt/images/server.img -s 10
The -f/-s options say to create an image file that is 10GB.
Here's what was implemented:
virt-install -n server -r 512 -w bridge=virbr0 \
--disk path=/var/lib/libvirt/images/server.img,size=10
Rather than complaining that the -f/-s options were deprecated, it invoked the new syntax and assumed I wanted an IDE drive, which on RHEL 6 is named as if it were a SCSI device.  We can force the paravirt driver by using the correct command:
virt-install -n server -r 512 -w bridge=virbr0 \
--disk path=/var/lib/libvirt/images/server.img,size=10,bus=virtio
Second, the GUI does not allow a VM's disk type to be selected from the install wizard-- it always defaults to the paravirt driver.  To force a specific driver, on the last screen of the wizard, check the box for "Customize configuration before install".

This will open a new window listing the VM's hardware.  Select "Add Hardware", select "Storage", and configure a second disk the same size as the first.  At this point, there is a pull-down menu to specify the driver.  Once the second disk is in place, remove the first.  Removing the first disk before adding the replacement disk can cause problems.

Hint: Once you've modified the hardware, there is no "Apply" option.  Just close the window and the VM will launch.
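Either way, it's worth confirming which driver the VM actually got.  The disk bus is visible in the domain XML -- a sketch, assuming "server" is the VM name used in the examples above:

```shell
# Show the disk definition; the bus= attribute on the <target> element
# reveals ide (sda-style naming) vs virtio (the paravirt driver).
virsh dumpxml server | grep -A 4 '<disk'
```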

Kitchen is Getting Close

Got the cabinets-- waiting on the counter top.

Turns out installation was not included with the vent hood, like that makes sense. What's that you say? "Nickel and dime"? Yep.

Sunday, October 07, 2012

Certified Ethical Hacker

I recently took the Certified Ethical Hacker (CEH) class and certification exam.  First, I passed.  Second, I was a little disappointed with the class.

Let's take a look at the first item: I passed the test.  How can anyone reasonably complain about passing a certification test?  Let me contrast the certification test with three other tests. 

RedHat certification tests are all hands-on: here's a broken computer; fix it.  Generally speaking, if you have at least one year of experience with Linux, take a RedHat class, and understand the hands-on labs, you can pass the test.  The ITIL advanced certifications are almost the opposite.  Unless you have several years of workplace experience making IT management decisions, the class is of little help with the certification exams.  In the case of ITIL's advanced certifications, the proctored, paper exams test your ability to apply their methods to your real-world experience.  And then there is VMware, whose certification is a multiple-choice, computer-based test.  A quick breeze through the VMware docs, and just about anyone can pass the test.  As such, VMware requires you to take their class before you can get the certification, which makes the VCP little more than an indicator of class attendance.

The CEH is a multiple-choice, computer-based exam, like VMware's.  The difference, however, is that (having taken the class) I'm not certain I could have passed the test based only on what I learned in the class.  Even though the class is structured like a RedHat class, with lecture and hands-on labs, I feel the exam required some real world experience.

Don't get me wrong... I'm not saying that "simply" taking the class should be enough.  I do agree that a candidate should have some experience in the area of study, but I feel that the purpose of the class and labs should be to solidify what they've seen, fill the gaps in what they haven't, and help them identify where they are weak.

And this brings me to the class.

Every class starts the same way-- introduce yourself and say what you hope to get out of this class.  Most people say something like "I'm John, and I want to pass the test."  This time, I said:
I'm Doug, and for the last ten years customers have insisted that I implement obscure security protocols, but I've never seen someone demonstrate that they can successfully breach a properly configured system.  I'm hoping this class will provide some validation that there really is a threat more sophisticated than scripts looking for default passwords.
What did I learn from the class?  Three things:  Windows sucks, Linux is invincible, and once a month 10% of users should all be shot.

At this point, let me interject that the CEH course material was the highest quality training material I have ever seen.  They had color graphics, high quality artwork, no diagrams stolen from vendor brochures, and the books even had spines... like a real book that you'd buy at a book store.  We got six disks' worth of tools.  And goodies like a backpack and a T-shirt!  The first half hour of class was like being six years old on Christmas day.  The rest of the class was like the feeling you have after you've opened all the Christmas presents, and you realize that it's all over.

Some of the labs were interesting, but there are only so many times you can demonstrate that Microsoft has sacrificed security for usability.  After a couple days, the fact that insiders and stupid users are allowing access to the network was well worn.  There really was no need for more than one lab demonstrating that organizations expose too much information to Google and Netcraft.

I'm going to end with one last thought, and this really doesn't have anything to do with CEH.  Human beings can learn anything from books, but we like to be taught by other human beings.  An instructor provides three basic services in a class:
  1. Focus the student's attention on what is really important in the book, and identify the fluff and filler.
  2. When a student indicates they do not understand the book, offer more detail or alternate examples.
  3. Provide value-add in the form of real world examples or relevant material outside of the book.
If you ever find yourself as a technical instructor, pay heed to what I'm about to say next:  If you can't fulfill at least one of the services above, simply being [ cool | fun | entertaining ] isn't enough.

Tracking SSH Tunnels

Native to Secure Shell (SSH) is the ability to create point-to-point, encrypted tunnels.  The function was designed to provide legacy protocols, such as mail (SMTP/POP), with encryption.  A user could log in to an SSH server in their company's DMZ, open a tunnel from their laptop to the server, and redirect their mail client through the tunnel.  On the surface, this sounds like a good idea: it protects the exchange of company data from the "protected" corporate intranet to users "in the field".
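That legacy-mail scenario might look like this (the host names and local port are hypothetical):

```shell
# From the laptop: forward local port 2525, through the DMZ server,
# to the internal mail host.  The mail client is then pointed at
# localhost:2525, and the hop to the DMZ travels inside SSH.
ssh -L 2525:mail.example.com:25 user@dmz.example.com
```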

But, as with all good things, there is room for abuse.  Consider the opposite scenario:  What if a user inside the corporate intranet SSH'ed to the DMZ server and built a tunnel to allow them to surf the web, thus bypassing the content filters?
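A sketch of that abuse (hypothetical host name):

```shell
# From inside the intranet: open a SOCKS proxy on localhost:8080 that
# exits through the DMZ server, then point the browser's proxy settings
# at it -- web traffic now bypasses the content filters.
ssh -D 8080 user@dmz.example.com
```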

Granted, content filters are just a way for the man to oppress middle class workers.  By censoring free thought, the 1% is able to keep the 47% running on the hamster wheel of consumerism.  Hear me, my brothers!  There will come a day when the proletariat will raise up and declare their freedom from the jack-booted thugs of Wall Street and their Illuminati masters.

But I digress...  Where was I?  Oh yes, SSH tunnels. So the question is this:
How can we monitor the SSH tunnels defined on the server to ensure they are not being abused?
Much to my surprise, the answer is:  You can't.

There does not seem to be any mechanism for determining what tunnels exist, and here's why.  The tunnel is defined on the client end, where the SSH client is always listening.  When the client receives a packet that matches a tunnel, the packet is shipped to the server with handling instructions.  When the server gets the packet, it opens the needed socket, fires it off, then closes the socket.  In other words, the connection from the server to the destination is not persistent... it behaves more like UDP than TCP.

Since a socket is opened, it is possible to capture it with lsof -i, but since the socket is transient, trying to catch it in a while/do loop is a matter of pure luck.
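That luck-based loop might look like:

```shell
# Poll the sockets owned by sshd; a tunnel's outbound leg appears and
# vanishes between iterations, so catching one is mostly chance.
while :; do
  lsof -nPi TCP -a -c sshd 2>/dev/null | grep ESTABLISHED
  sleep 1
done
```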

This means we have two choices, one of which shouldn't count.

In order to catch someone using a tunnel to surf out of the DMZ, we need an IPtables rule to catch the outbound packets.  As it turns out, any packet originating from a tunnel will use the server's IP address as the source address.  We only need to log the initial connect, so we only need to match the SYN flag.  To further complicate things, our abusive user may be using a proxy, so we can't restrict our checks to ports 80 and 443.
iptables -A OUTPUT -s <DMZ_IP> \
  -o eth0 -p tcp --syn \
  -j LOG --log-prefix "out-syn "
Here, we are looking for OUTPUT, since we are assuming that this DMZ machine is supposed to be building tunnels.  The (-s) address is the address of the DMZ machine.  In this case (-o) eth0 is the internet side of the machine and eth1 would be the intranet side of the machine.  Notice that no port number is assigned to the (-p) TCP statement.  Lastly, we are going to log this message.  (The trailing space in the quotes is significant.)

This rule will catch bad tunnels, but ignore good tunnels, on the grounds that good tunnels will use (-o) eth1 to get to the intranet resources.

If you'll recall, I said there were two choices.  The second is this:
iptables -A OUTPUT -s <DMZ_IP> \
  -o eth0 -p tcp --syn \
  -j DROP
In this case, we are refusing all outbound TCP traffic from the DMZ machine.  (Since DNS is UDP, we can still resolve the addresses of the inbound SSH connections.)  As stated above, we are allowing the good tunnels, since they use (-o) eth1.

So which of the two rules shouldn't count?  The first:  We shouldn't have to "catch" abusive users, we should just stop them.  Of course, we could use both lines to first log them and, second, prevent the connection.  This allows us to know who the abusers are, and bitch slap them for their feeble attempt-- for they are probably using Windows workstations, and deserve to be degraded.
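Chained together, the pair might look like this -- a sketch, with <DMZ_IP> standing in for the DMZ server's own address:

```shell
# iptables evaluates rules in order, so the LOG rule must come before
# the DROP: first record the outbound SYN, then refuse it.
iptables -A OUTPUT -s <DMZ_IP> -o eth0 -p tcp --syn \
  -j LOG --log-prefix "out-syn "
iptables -A OUTPUT -s <DMZ_IP> -o eth0 -p tcp --syn \
  -j DROP
```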

What's that you say, Mr. Boss?  You want me to prove abuse exists before locking down the DMZ?  Okay, we implement rule number 1, log the abuse, and then later lock down with rule number 2.

What's that you say, Mr. Boss?  Prove the abuse exists without implementing rule number 1?  Ah...  No can do.

Oh well, if you want me, I'll be in my cube.  Listening to Slacker internet radio, via an SSH tunnel, through the DMZ.