May 27, 2013

I’ve been using a combination of scripts to do local backups on my Amazon EC2 micro instance I use to serve this website – AutoMySQL Backup and some cron jobs which ran rsync for various paths on a rotation. For example:

rsync -a /var/www/html /mnt/backup/filebackup/weekly
rsync -a /var/lib/g2data /mnt/backup/filebackup/weekly

This is frankly a pretty lazy way to do it. I’m not protected at all if something wipes out the backup destinations or the EBS drive goes bad, and this method uses up a lot of EBS space because there are multiple sets of the files. I could use the AWS EC2 framework to script out EBS snapshots, but that’s just going to further increase my monthly Amazon bill without any ability to be very specific about point in time restores for files. Instead, I thought I should make use of something that I’m already paying for: CrashPlan.
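For completeness, the rotation above was driven by plain cron entries along these lines (the schedule shown here is illustrative, not my exact crontab):

```
# m h dom mon dow  command
0 2 * * 0  rsync -a /var/www/html /mnt/backup/filebackup/weekly
0 2 * * 0  rsync -a /var/lib/g2data /mnt/backup/filebackup/weekly
```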

As noted in a previous post, I’m using CrashPlan to backup our desktop & laptop computers, as well as my file server (Synology NAS). I have a CrashPlan+ Family Unlimited plan which means I can add up to 10 computers and store unlimited backups from them to the CrashPlan cloud included in the plan (local backups, or peer to peer Crashplan backups are always free and don’t require a plan).

Install CrashPlan on an Amazon Linux AMI

CrashPlan offers a number of different clients, including a headless Java client for Linux. This is perfectly suited to the micro instance I’m using in EC2 – the Amazon Linux AMI, which is based on RedHat/CentOS. I installed the headless client using the following options – note that I’m using the latest version at the time of this post (3.5.3) in my commands below, but you can find the latest download link on their page. I’m also using sudo in my commands; you can remove or ignore that if it isn’t needed in your Linux configuration.

sudo yum install grep sed cpio gzip coreutils
sudo tar -xzf CrashPlan_3.5.3_Linux.tgz
cd CrashPlan-install/
sudo ./install.sh

At this point the installer launches and will ask questions about where the files should go. Their suggestions are reasonable for my configuration and I was able to simply follow the defaults, hitting enter the whole way through the install. Once the install finishes, it will start the CrashPlan service automatically.


Connect to the headless CrashPlan Linux server with a remote client

Now that the service has been started, a remote client needs to connect to the server in order to further configure backup options. The easiest and most secure way to do that is by making use of SSH’s ability to tunnel to the server. The following instructions are for Windows, but similar steps can be performed on other operating systems. First, install the CrashPlan client if it is not already installed on your computer, but don’t start the program. Next, locate and edit the ui.properties file using a text editor. This file is typically located in C:\Program Files\CrashPlan\conf\ on Windows systems. As shown below, remove the # to uncomment the servicePort line, and change the port to 4200. When done, save the file and exit.
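For reference, the relevant line in the client’s configuration file (ui.properties) should look something like this once edited – the default value ships commented out:

```
# Before (default, commented out):
#servicePort=4243

# After (uncommented, pointing at the local end of the SSH tunnel):
servicePort=4200
```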

Edit servicePort

Next the SSH tunnel needs to be enabled for the client to connect to the server via SSH. Open PuTTY and create a new connection to your Linux server. Under the configuration menu, navigate to Connection, SSH, then click the Tunnels option in the menu. On that page, enter “4200” as Source port, enter “localhost:4243” as the Destination, and click the Add button. Once completed, connect to the server as normal via the configured SSH session and leave the terminal window open.
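On a Mac or Linux machine the same tunnel can be opened with the OpenSSH command-line client instead of PuTTY – a sketch, where the key file and server address are placeholders you’d substitute with your own:

```shell
# Forward local port 4200 to port 4243 on the CrashPlan server, over SSH.
# "your-key.pem" and "ec2-user@your-server" are placeholders.
ssh -i your-key.pem -L 4200:localhost:4243 ec2-user@your-server
```

Leave the session open while the CrashPlan client is in use, just as with the PuTTY window.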

putty tunnel add

putty tunnel added

At this point the CrashPlan client can be started. It will first ask for CrashPlan credentials, then display the usual interface. Note that the compression and dedupe options can be resource-heavy, which means the first backups for the server will likely consume a lot of CPU, particularly on EC2 micro instances, which have low (burst-based) CPU throughput to start with. This CPU usage should decrease over time as the backup deltas get smaller.

crashplan ui linux remote

Configure CrashPlan Linux to backup /var or other hidden directories (if needed)

Note that by default, several directories and file structures are hidden in the CrashPlan client for Linux. In my case I want to back up files under /var, as that is where my gallery2 files reside, as well as my web content. In order to expose that folder structure to CrashPlan, the my.service.xml configuration file should be edited and the <pattern regex="/var/"> line under the Linux section removed. First stop the CrashPlan service and edit the config file (assuming you installed using the default file paths):

sudo service crashplan stop
sudo vi /usr/local/crashplan/conf/my.service.xml

Next, look for a line like the following under the Linux section and remove it from the file (e.g. dd in vi):

<pattern regex="/var/"></pattern>

Save the file (Esc, :wq + enter in vi) and then start the service back up. After connecting again using the client, the /var folder should now be visible.

sudo service crashplan start
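If you’d rather script the change than edit it by hand, the same edit can be done with a sed one-liner. A sketch, demonstrated here against a scratch copy of the file – for the real edit, point the same sed (with sudo) at /usr/local/crashplan/conf/my.service.xml after backing it up:

```shell
# Demonstrated on a scratch copy; run the same sed with sudo against
# /usr/local/crashplan/conf/my.service.xml for the real config.
cat > my.service.xml <<'EOF'
<linux>
  <pattern regex="/var/"></pattern>
</linux>
EOF
# Delete the line containing the /var/ exclusion pattern, in place.
sed -i '/pattern regex="\/var\/"/d' my.service.xml
grep -q 'regex="/var/"' my.service.xml || echo "exclusion removed"
```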

crashplan ui linux remote file selection

May 23, 2013

iTunes and Amazon MP3 have been dominating music sales for years now, but there is plenty of evidence that providers have wildly different strategies when it comes to pricing. This is made all the more absurd by Amazon’s AutoRip functionality, which blurs the lines between a digital and a physical purchase even further.

Exhibit A is Vampire Weekend’s Modern Vampires of the City album. As one might expect given the lower production costs, the digital version is dramatically cheaper than the physical CD:
Vampire Weekend prices on Amazon

Exhibit B is Daft Punk’s new Random Access Memories album. The incredible thing in this example is that you can get a digital copy of the album for less by buying the physical CD – Amazon gives you instant access to the digital version as well via AutoRip:
Daft Punk RAM on Amazon

Both of these are new albums from popular acts, released a week apart. Some argue that digital versions should be priced the same as or higher than physical ones, since the buyer gets the benefit of instant gratification. In the case of Amazon purchases, however, this logic no longer applies. One could certainly argue that the Daft Punk offering has much more demand behind it given their advertising blitz, but I find it curious that these two examples are so different.

Dec 8, 2010

Too much money, too much downtime. That sums up my experience with my web hosting company over the last couple of years. My shared hosting account costs about $120/year, and it sure didn’t feel like I was getting my money’s worth. Due to ridiculous email downtime and failing SSL certs, I moved email services to Google Apps a year ago. That helped a lot, but many times over the last few months my web server has gone down for hours at a time. The alternatives were to deal with another unknown web host (with likely the same problems) or buy a VPS (virtual private server) for more than what I am paying now. Amazon Web Services was out of the question for personal use, as a small instance ran $300 per year.

All that changed this fall, though. Amazon introduced micro instances – 613 MB RAM machines which provide a small amount of consistent CPU plus burst CPU capacity when additional cycles are available. This is plenty of horsepower for my little website; probably more overall resources than it had access to in shared hosting. Importantly, the pricing is quite reasonable when you consider reserved instances. The three-year reserved instance works out to $88.65 per year, plus storage and bandwidth costs (minimal in my case). The real kicker, though, is that Amazon is eating the cost of a micro instance + services for one year with their AWS Free Usage Tier to try to get more customers using AWS.

You can’t beat free, right? This sounds like a hell of a deal, and it is. But it does come with hidden costs – your time and experience with two aspects:

  • The Amazon Web Services platform. I’m pretty familiar with AWS already – I’ve been using Amazon Web Services (EC2, EBS, S3, etc) at work for about two years now. It is great for being able to expand out with as much processing power as you want. Though things have been quite simplified these days (boot from EBS, elastic IPs, web control panel, etc), the service and concepts can have a fairly steep learning curve if you are new to it.
  • Configuring & running a Linux server. I’m using Amazon’s Linux image for my server with MySQL and Apache installed. Getting applications like Gallery and WordPress running happily on a new server does take some reading up if you aren’t familiar with Linux and web concepts (e.g. using yum to install dependencies, editing config files to enable PHP modules and .htaccess). You also need to think about things which are normally taken care of by your web hosting provider, like backups.
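To give a flavor of what that setup involves, here is a rough sketch of getting the basic web stack going on the Amazon Linux AMI – package and service names are my assumption for this AMI generation and can vary by release:

```shell
# Install Apache, MySQL and PHP on the Amazon Linux AMI
# (package names may differ on other distributions/releases)
sudo yum install -y httpd mysql-server php php-mysql
sudo service httpd start
sudo service mysqld start
# For WordPress-style .htaccess rules, set "AllowOverride All"
# for the site's directory in /etc/httpd/conf/httpd.conf
```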

This page is being served to you from my Amazon micro instance. It took me an afternoon to transfer my files from my old hosting provider and get everything set up correctly on Amazon. If you were new to the platform or Linux, it would take longer than that. But treated as a learning experience, it is an amazing opportunity. The AWS Free Usage Tier lets you try out a server for a year for free – that’s pretty damned amazing. Frankly, I don’t know of a better learning lab: you can pick and choose from hundreds of starting images, destroy them, and start fresh at any time, easily and quickly, at no cost.

The real question is – will I still think it is a good idea to run my own web server a year from now? Probably not, but it was a fun little project.

Oct 7, 2010

The first round of SSH clients for the iPhone presented some problems when connecting to Amazon Web Services EC2 Linux server instances. EC2 instances require a private key file to authenticate during an SSH session. This led to some workarounds where one had to export a key from the iPhone and add that key to the EC2 server instance. This wasn’t much fun to do. Thankfully, the latest versions of many SSH apps for the iPhone support private key imports. For my example below I’m going to be using the iSSH app:

1. Find the .pem key file saved during keypair creation in Amazon Web Services for the instance you launched.

2. Get the content of the .pem file into the iPhone’s copy/paste memory. There are several ways to do this; here are two of them:

– 2a. Save the pem file to dropbox and open the file on the iPhone using the dropbox app (note you likely need to rename the pem to .txt in order for iOS to allow you to read the file).

– 2b. Open the .pem file with a text editor and copy the contents into a new email to an iPhone account

3. Open iSSH, go to General Settings -> Configure SSH Keys -> Import Key…

iSSH home screen

4. Paste the content of the .pem file into the lower text box; ignore the Key Password field unless you specified one when generating the key separately (Amazon keys don’t typically have passwords).

Save the private key file

5. Go back to the iSSH home screen and select Add Configuration…

6. Select Use Key and choose the key file saved earlier.

Selecting the key

7. Save the configuration and connect to the server instance.
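For comparison, the same key-based login from a desktop OpenSSH client is a one-liner – the key file name and host below are placeholders, and note that SSH will refuse a key file with loose permissions:

```shell
chmod 400 my-key.pem   # SSH rejects private keys that are world-readable
# "my-key.pem" and the hostname are placeholders; the login user is
# typically ec2-user on the Amazon Linux AMI.
ssh -i my-key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
```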

Connected to AWS EC2 Linux server