Running Update

Quick update on my femoral stress fracture and running.

It has been just over 5 months since I was injured running the SF Half. Posts on the recovery have been slim to none, as there wasn't much to report.

Good news - I have run 2 miles straight twice now, with no pain during or after! Five months ago I couldn't walk a mile. To be able to knock out a few miles is an incredible, exhilarating feeling.

I will do a separate post about the specifics of my recovery so that others can benefit from my ... er ... experience.

Farros Capital Launches

My now "old" business partner and MessageCast co-founder, Royal Farros, has made some big announcements recently.

Last week, he announced that he has become the CEO of IMSI/Design (also forming the investor group that purchased the IMSI TurboCAD technology).

Today comes word that he has formed Farros Capital, which will focus on seed investment for startups. 

Good luck Royal, we'll miss you at the b0rg.


(Back in the day at NASA)

 

A Good Build

What makes a build process Good?

We're in the midst of creating a new/merged system and (fortunately) have the opportunity to design a build process from scratch (well, almost - a few aspects of the process are "mandated").

My experience with builds has always been mixed. When I worked at Visa, I designed the build system for the VisaNet Access Point, v10 (last I checked, the VAP is now known as GATT). I also introduced version control (RCS) to a product that previously had none (hard to imagine that 40 devs had no SCM, isn't it?). The build was fast and effective. At KPMG and Deluxe, I had no role in the build process. Builds at iPrint were extremely weak for the first few years; we simply didn't put any time into it. Things didn't solidify until one of our QA guys (Scott) took the initiative to create a kick-ass system he called Garfield.

Drawing on my previous experiences, I wanted to do things differently with MessageCast. Almost from the start, I set out to create a build process that was solid. The requirements, which evolved over time, were:

* Totally automated
* Every build was clean; no artifacts could remain from a previous build
* Every build had a version number in the format xx.yy.zzz (e.g. 3.1.364)
* Every file used in a build was labeled, including source, images, css, jsp
* Unit tests must be performed after compilation, and include the ability to fail the build if a unit test fails.
* Unit test reports must be available in HTML and easily viewed
* Code coverage must be performed on the unit tests and include the ability to fail the build if the coverage is not up to par (e.g. 90% coverage for methods)
* Code coverage reports must be available in HTML and easily viewed
* Every build must generate documentation based on the markup in the source (JavaDoc, etc)
* Every build must capture data/information produced at each phase (compilation, etc).
* Every build must produce email to an alias providing information on success/failure. If a build fails, the email should contain enough detail to determine who should look at the problem.
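To illustrate the coverage requirement above, the gating logic amounts to something like this (a hypothetical sketch, not our actual build code; the class and method names are invented):

```java
// Hypothetical sketch of a coverage gate: fail the build when
// method coverage falls below a configured threshold (e.g. 90%).
public class CoverageGate {

    // Computes percent coverage from raw counts a coverage tool reports.
    public static double methodCoverage(int coveredMethods, int totalMethods) {
        if (totalMethods == 0) return 100.0; // nothing to cover
        return 100.0 * coveredMethods / totalMethods;
    }

    // Returns true when the build should be allowed to proceed.
    public static boolean passes(int coveredMethods, int totalMethods,
                                 double thresholdPct) {
        return methodCoverage(coveredMethods, totalMethods) >= thresholdPct;
    }

    public static void main(String[] args) {
        int covered = 452, total = 500; // counts a report might emit
        if (!passes(covered, total, 90.0)) {
            System.err.println("BUILD FAILED: method coverage below 90%");
            System.exit(1); // non-zero exit fails the build step
        }
        System.out.println("Coverage OK");
    }
}
```

The non-zero exit code is what actually fails the build: the build script runs the gate as a step and aborts when the step exits abnormally.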

We constantly tweaked the build and were rewarded with a solid process that served us well.

Drawing on the MessageCast experience, we want the new build to follow the same requirements (with the addition of a code analysis tool).

Currently, our tool choices look something like this:

1. SCM - SourceDepot (Perforce), Subversion
2. Build script - nAnt, msbuild, CoreXT
3. Unit Tests - nUnit, VSTS
4. Unit Test Reports - nUnit reports
5. Code Coverage - Clover, Magellan
6. Code Analysis - FxCop

Whichever tools we choose, our new process should be significantly better than the process it is replacing.

Ecto 2.0 Ships

Ecto 2.0 for Windows has shipped (it was in beta for a while).

It looks like they fixed some issues with MT (especially formatting) that were present in 1.8. Fingers crossed that they fixed the preview crash bug from 2.0 rc1.

One issue still out there - if I add tags to a post and save it locally, the tags are missing when I re-open the entry.

10 Years Old - Happy Birthday iPrint

Ten years ago today, Royal Farros, Mike Rubin and I (Letty Swank wouldn't join until August) convened at the inaugural iPrint office next to the windtunnel at NASA/Ames in Mt. View. As the company gets older, our memories get foggier. More than a few folks think we started iPrint on a different day in May, but I am convinced it was on 5/1/1996.

The web was an unknown thing to many and leaving my job to head off into the wild was questioned by my friends, family members and soon-to-be in-laws. We had a lot of ideas on how to revolutionize print via the Web, but our first day was all about getting our new Compaq desktops to work on the in-house network. Sitting at our WWII-era metal desks, we didn't have any idea of the wild ride that was in store for us, culminating in a public offering on the NASDAQ in 2000. There were a lot of extreme highs and lows on the journey -- bittersweet no doubt.

Royal and I went on to start MessageCast in 2002 (Mike joined up that summer). Letty is still running iPrint, along with some of the first 20 hires including Igor and Britta (Igor has been at iPrint for 10 years this June!).

Happy birthday iPrint - wishing you continued success. 

Using S3 as a Media Store

Adrian Holovaty of ChicagoCrime has a post on using S3 to serve his media files.

This is a great idea for podcasters to off-load both storage and bandwidth issues at a pretty good price. Much more reliable than some of the solutions available on the market today. I read somewhere (?) that S3 is using the actual Amazon infrastructure, which makes sense given that Amazon is also doing private labels for companies like Target.

Check out the comments in the post: Doug Kaye links to his proposal to have S3 function as a CDN. My guess - Amazon will do just this, especially given the ever-growing market cap of Akamai (AKAM), which is at 4.64B currently.

memcached for win32

Scott Johnson sent me a link to a presentation (PDF) the guys from LiveJournal gave at a Linux conference last year. (As you might imagine, scale is one thing LiveJournal has to monitor closely)

Part of the architecture uses memcached, a server-based object cache built by Danga (the guys who run LiveJournal). The object cache helps reduce the number of queries made to the DB, which helps the site remain speedy as the number of visitors increases. (Here's a good article about it)
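The pattern memcached enables is usually called cache-aside: check the cache first, and only query the database on a miss. A toy sketch of the idea (using a HashMap in place of a real memcached client, with the database simulated too - all names here are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of the cache-aside pattern memcached enables.
public class CacheAsideDemo {
    // Stand-ins: a HashMap for memcached, another for the database.
    static Map<String, String> cache = new HashMap<>();
    static Map<String, String> database = new HashMap<>();
    static int dbQueries = 0; // counts how often we actually hit the "DB"

    static String getUser(String id) {
        String value = cache.get(id);            // 1. try the cache
        if (value != null) return value;         // cache hit: no DB work
        dbQueries++;
        value = database.get(id);                // 2. miss: query the "DB"
        if (value != null) cache.put(id, value); // 3. populate the cache
        return value;
    }

    public static void main(String[] args) {
        database.put("42", "brad");
        getUser("42"); // first read goes to the DB
        getUser("42"); // second read is served from the cache
        System.out.println("DB queries: " + dbQueries);
    }
}
```

Repeated reads of hot objects never touch the database, which is exactly why it keeps a high-traffic site like LiveJournal speedy.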

I learned to my dismay that memcached does not run on Win32 platforms due to some issues with libevent and memcached. After quite a bit of searching, I was ready to call it a night and started thinking about some of the vendors I had looked at the other day.  I decided to troll the memcached email list and lo and behold, a kind soul (Kronuz) had figured out the issues *and* posted binaries for both libevent and memcached. I quickly downloaded them and wrote a simple test client in Java. Firing up memcached was straightforward as was running my test client.
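For the curious, memcached speaks a simple plain-text protocol over TCP, which is what makes a quick test client so easy to write. A sketch of the command framing (the class is my own invention for illustration, not the client I actually wrote; it only builds the command strings, assuming ASCII values - a real client writes them to a socket on port 11211 and parses replies like "STORED" and "VALUE ... END"):

```java
// Sketch of memcached's plain-text wire protocol framing.
public class MemcacheProtocol {

    // set <key> <flags> <exptime> <bytes>\r\n<data>\r\n
    public static String setCommand(String key, String value) {
        return "set " + key + " 0 0 " + value.length() + "\r\n"
             + value + "\r\n";
    }

    // get <key>\r\n
    public static String getCommand(String key) {
        return "get " + key + "\r\n";
    }

    public static void main(String[] args) {
        System.out.print(setCommand("greeting", "hello"));
        System.out.print(getCommand("greeting"));
    }
}
```

Fire up memcached, open a socket, send those two commands, and you should get the stored value right back - which is more or less what my test client did.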

Ah, the sweet smell of victory