Detecting IPv6 clients on a static site with nginx

This blog is IPv6-friendly, and if you look over at the left sidebar, you might notice a special greeting if you're visiting this site over IPv6. The site is completely static, generated with Hugo, so I'm using a small nginx+CSS trick to make this work.

To do something similar for your site, you'll first need to create an HTML element and give it a unique class, like ipv6-detect:

<div class="ipv6-detect">
  Thanks for visiting over IPv6.
</div>

Next, you need to create two CSS files in your document root, one for IPv4 users and one for IPv6 users. For example:

In /css/ipv6.css:

/* Anything with this selector will be displayed on the page */
.ipv6-detect {
    display: block;
}
/* Anything with this selector will be hidden from the page */
.ipv4-detect {
    display: none;
}

and in /css/ipv4.css:

/* Anything with this selector will be hidden on the page */
.ipv6-detect {
    display: none;
}
/* Anything with this selector will be displayed on the page */
.ipv4-detect {
    display: block;
}

Now we need to include the CSS in our HTML, but we're going to do it a little differently: we'll include only /css/ipv6.css and rely on nginx to swap in ipv4.css for IPv4 clients.

<link rel="stylesheet" href="/css/ipv6.css">

Finally, we just need to configure nginx to serve the correct file to the user.

In your nginx.conf (or site configuration file):

# Force the "Cache-Control: no-cache" header when serving these CSS files.
# This keeps the client from caching them and displaying the wrong thing
# if they switch IPs from v6 to v4 or vice-versa in the future.
location ~ ^/css/(ipv6|ipv4)\.css$ {
   expires -1;
}

# Check $remote_addr for something that looks like an IPv4 IP.  It's a
# lame regex that's not perfect but good enough to deal with the already-
# clean IP addresses that come in $remote_addr.
if ($remote_addr ~* "^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$") {
   # This appears to be an IPv4 client, so we send them the ipv4.css instead.
   rewrite ^/css/ipv6.css$ /css/ipv4.css break;
}
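Putting the pieces together, a complete minimal server block might look something like this. This is a sketch: server_name and root are placeholders for your own site, and note the second listen directive, which nginx needs in order to accept IPv6 connections in the first place.

```nginx
server {
    listen 80;
    listen [::]:80;              # without this, IPv6 clients can't reach you
    server_name example.com;     # placeholder
    root /var/www/example.com;   # placeholder document root

    # Never cache the detector stylesheets, so a client that moves
    # between IPv4 and IPv6 re-fetches the right one.
    location ~ ^/css/(ipv6|ipv4)\.css$ {
        expires -1;
    }

    # IPv4 clients silently get ipv4.css in place of ipv6.css.
    if ($remote_addr ~ "^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$") {
        rewrite ^/css/ipv6\.css$ /css/ipv4.css break;
    }
}
```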

Debugging APRS clients with a virtual null-modem cable using socat and tnc-server

While working on my GoBalloon project, I found myself needing to connect two AX.25/KISS APRS clients together for debugging.  If your computer has two hardware RS-232 serial ports, you can accomplish this by connecting a null modem cable between them and attaching an APRS client to each port.  Today I discovered an easier way that doesn't require a serial port at all: the socat utility.  socat is available in most Linux distros, and there are a few Windows ports out there as well.

To create the virtual null modem cable, run socat like this:

% socat -d -d pty,raw,echo=0 pty,raw,echo=0

2014/08/10 19:08:28 socat[25083] N PTY is /dev/pts/3
2014/08/10 19:08:28 socat[25083] N PTY is /dev/pts/4
2014/08/10 19:08:28 socat[25083] N starting data transfer loop with FDs [3,3] and [5,5]

As you can see above, socat will respond with two virtual serial ports (ptys).  In the example above, they are /dev/pts/3 and /dev/pts/4.

Once you have those, simply fire up your APRS clients and connect each of them to one of those virtual ports.   Everything sent by one client will be copied to the other client and vice-versa.
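For the curious, the heart of what socat is doing here can be sketched in Python with the standard-library pty module: allocate two pseudo-terminals and pump bytes between their master sides. This is a toy illustration of the concept, not a substitute for socat:

```python
import os
import pty
import threading
import tty

def virtual_null_modem():
    """Create two ptys and cross-connect them, like socat's pty pair."""
    m1, s1 = pty.openpty()
    m2, s2 = pty.openpty()
    # Raw mode, so the line discipline doesn't buffer or echo binary frames.
    tty.setraw(s1)
    tty.setraw(s2)

    def pump(src, dst):
        # Copy bytes from one pty master to the other -- the "virtual cable".
        while True:
            try:
                data = os.read(src, 1024)
            except OSError:
                return
            if not data:
                return
            os.write(dst, data)

    threading.Thread(target=pump, args=(m1, m2), daemon=True).start()
    threading.Thread(target=pump, args=(m2, m1), daemon=True).start()
    # Clients would open these two device paths as their "serial ports".
    return s1, s2, os.ttyname(s1), os.ttyname(s2)

if __name__ == "__main__":
    s1, s2, name1, name2 = virtual_null_modem()
    print("virtual ports:", name1, name2)
    os.write(s1, b"hello via pty")
    print(os.read(s2, 1024))
```

Anything written to the first device shows up on the second, and vice versa, which is exactly the behavior the socat command above provides.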

If you are debugging an APRS client that uses KISS-over-TCP, you can use my tnc-server utility to bridge the virtual serial port and the network.  Simply tell tnc-server to attach to one of those virtual ports and it will open a network listener that you can connect your KISS-over-TCP client to:

./tnc-server -port=/dev/pts/3 -listen=0.0.0.0:6700

If you want to attach two KISS-over-TCP clients to each other, simply fire up a second instance of tnc-server that listens on a different port.

./tnc-server -port=/dev/pts/4 -listen=0.0.0.0:6701

From there, connect one APRS client to <your_machines_IP> port 6700 and the other client to port 6701.
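tnc-server is the tool I actually use, but the core idea, shuttling bytes between a serial device and a TCP socket, can be sketched in Python. This is a simplified illustration under obvious assumptions: it handles a single client, does no reconnect or error handling, and the host/port values are just examples:

```python
import os
import socket
import threading

def serve_pty_over_tcp(pty_fd, host="0.0.0.0", port=6700):
    """Accept one TCP client and copy bytes both ways with a serial fd.

    A toy version of what a KISS-over-TCP bridge like tnc-server does.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _addr = srv.accept()

    def pty_to_net():
        # Frames arriving from the "TNC" side go out to the TCP client.
        while True:
            data = os.read(pty_fd, 4096)
            if not data:
                return
            conn.sendall(data)

    def net_to_pty():
        # Bytes from the TCP client are written to the serial device.
        while True:
            data = conn.recv(4096)
            if not data:
                return
            os.write(pty_fd, data)

    threading.Thread(target=pty_to_net, daemon=True).start()
    threading.Thread(target=net_to_pty, daemon=True).start()
    return srv, conn
```

In real use you would pass it a file descriptor opened on one of the virtual ports (e.g. /dev/pts/3) and point your KISS-over-TCP client at the listening port, just as with tnc-server.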

The Jack-of-All-Trades Home Server

I’ve been wanting to build a server for my home for a while and I finally got around to it over the holidays.   My goal was to ditch all of the aging equipment in my office and consolidate it into one powerful, do-it-all machine.  It took several days of hacking to get it all working but now it lives:

The Server

Consolidation

Before this server, I had a bunch of old, crap hardware running in my office.  The gear was a decade old and couldn’t keep up with modern Internet speeds and the new applications.   I decided to use VMware’s (free!) ESXi hypervisor to consolidate all of this old junk onto one box.   Onto this one server, I consolidated:

To run all of these things and run them well, I needed some big iron.  Here’s what I built:

  • Supermicro X9SRH-7F server-class motherboard
  • Intel Xeon E5-2620 hex-core server-class CPU
  • 32 GB Kingston DDR3 1600 MHz ECC RAM
  • 4 x 2 TB Western Digital “Red” server-class hard disks (RAID 10)
  • Gigabyte/AMD Radeon HD7970 video card (for GPU-assisted password cracking, more on that later…)
  • Intel I340-T4 quad-port 1000 Mbit ethernet adapter (PCIe)
  • Corsair HX1050 power supply (1050 Watt)
  • Supermicro CPU heatsink for narrow-profile LGA2011 CPU
  • Corsair 600T mid-tower case

This machine is a monster.   Even running all of these virtual servers, its resources aren’t even 10% utilized.

The Firewall

My old firewall was a Soekris net4801 appliance (circa 2003) running m0n0wall.  It protected my home network for ten years without once crashing.  The only real problem was that it could no longer keep up with modern home Internet speeds.

Soekris net4801

These days, we have a 35 Mbps connection at the house, but the Soekris tops out at a little over 21 Mbps.  When it maxed out, packets would drop and the Internet got flaky.  I wanted something that could handle 100+ Mbps with ease, and this new machine, with its four-port server-grade Intel NIC, was the ticket.  With a tiny 1 vCPU, 1 GB RAM virtual machine running pfSense, I can now push data as fast as Comcast will allow.

The old Soekris firewall:

The new, virtualized pfSense firewall:

The trickiest part about running a firewall under ESXi is getting the networking right.  The best (and, in my opinion, most secure) way to do this is to use dedicated NICs for each network.  I use one port for my WAN (cable modem) and another for the LAN (internal private network).  I have another network, the DMZ, for servers that I don't trust enough to run on the private network; the Backtrack Linux VM goes there.  I use ESXi's virtual switching to connect VMs to NICs:

The arrangement works well.  I get great throughput on the firewall and moving a machine between networks takes only a couple of mouse clicks.

Backtrack Server (My Very Own Evil Mad Scientist Laboratory)

I’ve been playing around with Backtrack Linux for a while now.  For those who don’t know, Backtrack is a Linux operating system designed for electronic security work.  It comes with a massive selection of exploitation, forensics, snooping, and analysis tools pre-installed.  If you were so inclined, Backtrack has most of what you need to break into networks and cause major havoc.  For its more altruistic users, it’s an outstanding toolkit for testing your network’s security.  Ever wonder if someone could crack your WiFi key and snoop around on your home PC?  Backtrack has the tools you need to find out.  By running exploits against your own network and understanding its vulnerabilities, you can better secure your data.

One of my favorite tools on BT is Pyrit.  Pyrit is an open-source password cracker capable of cracking WEP, WPA, and WPA2 passwords for WiFi networks.  The superhero power of Pyrit is its ability to use your computer’s graphics card (GPU) to greatly accelerate password cracking.  A good GPU (like my ATI HD7970) can test passwords hundreds of times faster than the average PC CPU.  With this kind of power, it’s possible to brute-force a password in a few days that might have taken years to crack on a conventional PC.  On an even more sinister note, Pyrit lets you cluster multiple GPUs running on multiple servers, potentially creating a massive password-cracking machine.  My ATI HD7970 is one of the fastest GPUs available on the consumer market, yet it only costs $400 on Amazon.  Can you imagine what a rogue state like North Korea could do if they got their hands on a few dozen of these cards?  A security firm recently clustered 25 GPUs together and achieved 350 billion password guesses per second, fast enough to crack any eight-character Windows password in five and a half hours.  Very powerful stuff.  Very scary.  I had to build my own.

The biggest challenge I faced in this project was making the HD7970 available to my Backtrack virtual machine.  ESXi provides a mechanism called “pass-thru” that lets you designate a VM to control a device like a GPU.  Unfortunately, the mechanism is poorly documented and I spent several days experimenting before I got it to work.  In the end, I had to enable pass-thru for the GPU devices and add a line to the VM’s .vmx file:

pciHole.start = "2853"

I won’t go into the particulars of how I determined this value, but you can find it if you Google around.  There’s also a procedure for editing .vmx files that you’ll need to follow; again, Google it.  Once you get the hole “punched” and pass-thru working, you’ll need to install the ATI drivers and SDK on the VM.  The biggest problem I ran into was that the newest version of the SDK is needed to support this card, but that version is missing some critical libraries necessary to get CAL++ and OpenCL working.  What I did was install the older (more complete) SDK and then install the newest version on top of it, which gave me everything I needed.  I also had to install the beta release of the ATI drivers because they’re the only version that supports the HD7970.  Finally (and this was a big, big stumper for me), I realized that I had to actually be running the Xorg X11 server (i.e., displaying a desktop on a monitor) for CAL++ and Pyrit to be able to “see” the GPU.  (Sorry for the tangent, but I put all of that in there to help the next person who tries this.)

Once the GPU is working under Backtrack, you can run Pyrit’s benchmark and see some dramatic numbers:

The Backtrack Linux desktop

That first benchmark is my GPU.   As you can see, it’s over 200x faster than the cores in my server’s Intel CPU.

Future Possibilities

My server is not yet perfect.  Of all the simple things, I haven’t yet figured out how to pass my motherboard’s USB controller through to the VM, so I can’t use a USB mouse or keyboard.  Unable to attach my mouse to Backtrack, I eventually hit on the idea of using Synergy to share my Mac Pro’s mouse and keyboard with the X11 server on the BT VM.  Since USB isn’t working, I also can’t attach my Alfa AWUS036H USB wireless adapter to the Backtrack machine, so I can’t capture WiFi packets directly on the VM; instead, I have to use my MacBook Pro to capture the traffic and manually copy the pcap files over.  In the future, I plan on setting up an IPsec VPN for the house and using it to access the power of the big GPU from wherever I am in the field.  I might even be able to get Pyrit clustering working over the VPN.

The ESXi platform gives me great flexibility for future expansion.   If an operating system can run on a modern PC, it can probably run virtualized under ESXi.   If I ever have a need for it, I might fire up another VM and install Windows Server for my home network.  ESXi can even run Mac OS X!  I’m super-happy with my decision to consolidate and I’m loving my much-less-cluttered office.

Stop piping curl to /bin/sh

I’ve noticed a trend lately where software developers ask you to pipe the output of an HTTP GET to a shell to install their software.  It’s certainly convenient for the inexperienced shell user who might not be comfortable with Apt or Homebrew; never mind that we spent the late 1990s and 2000s building these tools to make it easy to install software!  The pipe-to-shell method is definitely the new hotness, but this idiotic method has been around for a while.  When I was a clueless freshman at Vanderbilt in 1993, I used this technique to install an IRC client on a SunOS machine in the computer lab like a total noob:

telnet sci.dixie.edu 1 | sh

At the time, I had no idea what that command did, and I happily ran it.  Did it install a backdoor into my account on the VUSE systems?  Maybe.  It would have been ridiculously easy for the Dixie admins to do that.  What actually came down the wire was a pretty spiffy uuencoded shar(1) archive that packaged up some binaries and shell scripts.  All I knew was that I ran the command, a bunch of shit scrolled across my screen for a few minutes, and when it was done, I had an IRC client and I was happy.

Sure, you could fetch the URL in your browser first and review the shell script but will most people do this?   How good are you at quickly reading a shell script?   Could you spot a well-hidden backdoor, a little bit of obfuscation tucked away in the middle of a huge regex?

The pipe-to-shell technique is showing up more and more these days.  RVM uses it…

So does CopperEgg…

pip uses a variation of this (only slightly less degenerate) where they ask you to download and execute some Python…

To be fair, pip is also available through popular package managers, but they’re still pushing this method first to new users.

Why are we doing this?  It’s terrible from both a security and a maintainability perspective.  If a committer of one of these popular software packages gets their desktop 0wned, the users of their software might very well get rootkits installed on their servers without their knowledge.

What’s so wrong with good old apt and yum?  Unlike the pipe-to-shell method, these repositories have measures in place to authenticate packages and the very nature of a package allows the sysadmin to cleanly uninstall it at a later date.

Please stop piping curl(1) to sh(1) before it really becomes a thing.

Summer Days

The New Ride

Quad Latte

Happy New Year

White Christmas

Snow Plow Truck

I’m dreaming of a white Christmas…

Homebrew AVR Programmer

I finally got around to building a little Atmel AVR chip programmer using some perfboard and a ZIF socket.  Using CrossPack’s gcc cross-compiler, I can now compile for AVR chips on my Mac and burn the code directly to a chip, without needing a Linux or Windows VM.

I designed the programmer so that it can handle ATtiny25/45/85/2313 and ATmega48/88/168/328 chips, all in the same ZIF socket.  Spiffy.

My programmer connects to the Mac via a Pocket AVR Programmer from Sparkfun.

For what it’s worth, here is how I burn a compiled .hex image to an ATtiny2313 chip:

avrdude -p attiny2313 -c usbtiny -U flash:w:FILENAME_TO_BURN

Here’s an example of how I set the fuses on the chip:

avrdude -p attiny2313 -c usbtiny -U lfuse:w:0xe4:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m

About Chris Snell

I am an engineering manager based in Washington State, USA.

On the weekends, I serve as a Captain in the United States Army Reserve.

I am a hacker of code, electronics, and old Land Rovers.

I got my start on VT100 terminals and SPARCstations running SunOS and I still miss them.