Third Eye Blind “Ursa Major” Ships/Big Amazon Sale

Finally.

After months (and months and months) of waiting, Third Eye Blind has shipped “Ursa Major”. It was years in the making; we even had to live through a greatest hits package.

I started getting excited about the release when 3EB posted the “Red Star” EP on iTunes. The live version of “Why Can’t You Be” was awesome (and honestly, I was a little disappointed in the studio version). I’ve had Ursa Major on “repeat” all day and can say I am definitely getting into the disc.

Note that Amazon has it on sale (mp3 version) for $3.99 for a limited time.

What is “Cloud Computing” and What is the Future Valuation?

How *do* you define “cloud computing”? An interesting article examines how different research organizations are defining the term.

Gartner says:

a style of computing where scalable and elastic IT capabilities are provided as a service to multiple customers using Internet technologies

While a UC Berkeley paper offers:

Cloud Computing refers to both the applications delivered as services over the Internet and the hardware and systems software in the data centers that provide those services. The services themselves have long been referred to as Software as a Service (SaaS), so we use that term. The data-center hardware and software is what we will call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the public, we call it a Public Cloud; the service being sold is Utility Computing

People in the industry don’t necessarily agree with such a broad view of the cloud:

“If I define the cloud the way Gartner does, I could conceivably consider any Internet-delivered service as a cloud service,” Treadway said. “That’s not a helpful definition from the standpoint of the massive shift that’s going to happen over the next 10 years in computing architecture.” The article goes on to argue that Gartner is diluting the term and making its figures irrelevant, that other experts don’t defend Gartner’s definition, and that Gartner is at odds with the industry.

Gartner forecasts the cloud computing valuation at:

  • $46b in 2008
  • $150b in 2013

While IDC says the cloud will be worth:

  • $42b in 2012

Merrill Lynch says it will be:

  • $160b by 2011

My take: it’s really hard to say (how’s that for helpful?).

As an entrepreneur, “cloud computing” really equates to utility/on-demand computing: the ability to provision virtual instances on the fly and scale as demand/traffic requires, throwing out the old physical data center model and all the fixed costs that go with it, including hardware depreciation.

As a user though, it is really about the applications I use on a regular basis that exist in “the cloud”. Things like gDocs, Mint, Twitter, Facebook (and of course, Backpack).

In the end I suspect that the larger view will be used, if only because it makes the overall (revenue) numbers so much larger. Like anything, the potential will be overvalued in the near-term and undervalued in the long-term. Kinda like the web was back in the Web 1.0 days.

Now, who is going to acquire Amazon (AMZN) for their AWS technology??

Rave Run – Strawberry, CA

I spent last week in Strawberry, CA. In preparation for the start of training for the San Antonio Rock and Roll Marathon, we made sure to get in as many runs as possible during our stay.

Having run around Pinecrest Lake on previous trips, I became determined to figure out how to run from Strawberry, up to the dam, around the lake and back. We asked around, looked at topo maps, etc. The Forest Service wasn’t much help – the trail head was about a mile from their office, but they didn’t know anything about the place. Talking to some locals helped; we were able to at least find the trail head.

Our first attempt was pretty much a disaster. We missed the initial turn and ran to the end of a logging road. While we were trying to figure out where to go next, an employee of the local water company drove up. He hadn’t ever been on the trail either (??!!) but called his office and told us we were in the wrong place. We went back down to the highway and up another trail that he assured us “went right to the base of the dam”.

While it is possible that the trail did indeed get there 100 years ago, we spent a lot of time trying to find it as it went missing every couple of hundred yards. (Not naming names, but one of the people in the group was carrying a sharp stick and was sure we were going to be attacked by a mountain lion or possibly a bear at every turn.) After quite a bit of cross-country hiking/running, we crossed the river and found the trail. We made it to the top of the dam and promptly turned around, as we had blown through our allotted running time.

The second attempt was a success! We found the trail and had a great run. Our course was as follows:

  1. Drive to start: Hwy 108, turn at the Strawberry Inn. Follow the road (Herring Creek) until it dead-ends.
  2. Run approximately .25 miles, turning right at the first fork. Make sure to pay close attention as this is very easy to miss.
  3. Cross the creek (Herring Creek) next to the washed-out cement bridge.
  4. Look closely for the trail and follow it to the base of the dam (~1 mile). Note the signs 20 feet off the ground that say something like “Listen for horns. If heard, immediately seek higher ground as there is a water release from the dam.” Not sure how high you need to climb if the horn blows.
  5. Head up the switchbacks to the dam. At the top, go left (yes, up more) and run counter-clockwise around the lake (~4 miles). Note that some stretches are tough to navigate; watch for sprained ankles.
  6. If desired, head over to the store and refuel when you get to the docks.
  7. Follow the trail over the dam and back down through the valley to the start. (~1 mile)
  8. As an add-on, you can continue on Old Strawberry Road (as shown below)

Run Summary:
Start: Herring Creek Road, behind Strawberry Inn
Distance: 6.2 miles
Altitude: 5124 feet (min), 5933 feet (max)
Course: Very scenic, varying conditions
Support: No water, store at ~4.5 miles
Special Note: Listen for the horn!

The map from my Garmin is below:

Legalities of the Cloud

I finally finished “Cloud Application Architectures” (the Tour de France has been a real distraction of late – go Lance!), which is a great overview of cloud computing in general and utilizing AWS specifically.

One issue the author George Reese raises in the book concerns potential legal issues/concerns when your bits are cloud-based. For example, your virtual host is running on the same physical machine as another company’s. That company turns out to be under investigation for some shady dealings. Law enforcement officials in turn confiscate the physical hardware to prosecute the offenders, thereby taking your site down, along with your bits, including customer data. There are a number of other examples in the book, along with suggested ways to keep your data safe (encrypted file systems, etc.).

When reading the book, I thought some of these ideas might be a bit outlandish. Until I read an article on CNet today, entitled “Lawyers shine light on real cloud concerns”. James Urquhart covers some of the same ground as George Reese, while adding in some additional topics/questions like this post from an employment law attorney:

From an employment law perspective, I have not seen much, if anything on the subject. For example, Connecticut's wage and hour laws require employers to keep track of various records of the employee including hours worked, etc. The catch? Such records need to be kept at the employer's place of business for three years. Does storing the information in "the cloud" satisfy that?

Good food for thought; obviously the legal system won’t catch up to the realities of the cloud for quite some time, so it pays to delve into questions you might not normally consider when deploying a physical production instance at a data center.

Lance Armstrong Rocks

The Tour de France started on 7/4 – not sure if you are watching it, but it has already been great. The “old guy” Lance Armstrong is attempting a comeback, and if the rest of the race is anything like Stage 3, he just might pull it off.

Great tweet from him this morning, probably made the guys in Iron Maiden smile.

AWS: Pushing Dev/Test Environments in the Cloud

The AWS Start-up Tour, combined with a few books I’ve been reading lately (like the O’Reilly title “Cloud Application Architectures”), has made me re-think/see additional uses for cloud computing. Specifically, there can be significant cost savings in moving Dev/Test environments into the cloud.

(I’ll talk about the advantages of being able to quickly build/tear-down environments using images (AMIs) and EBS snapshots of a database in a different post.)

Using a hypothetical example, let’s say your startup/team has several environments: Test, Stress, Stage and Production. The architecture is cloud-friendly enough to be able to think about moving at least your Test environment into the cloud. The question is, should you?

Hardware Requirements/Costs

The physical machine layout for Test might consist of the following:

Server type    Cost     Qty   Total
Web            $3,000   1     $3,000
Application    $4,000   2     $8,000
Database       $4,000   2     $8,000
Cache          $4,000   2     $8,000
Total                         $27,000

(I’m excluding costs like bandwidth, rack space, hardware depreciation, etc. for simplicity. Including them only makes the case more apparent.)

The initial costs come to $27,000. Note that it is definitely possible to incorporate cheaper hardware (and these are only estimates, YMMV). In my current environment, these estimates would be significantly on the low end.

Release Cycle

Following a basic “agile”/deliver-often methodology, the project plan (you do have one, right?) might roughly look something like this:

Week(s)   Purpose
1         Planning
2-5       Coding
6-7       Test
8         Deployment

In the above example, Test consumes approximately 25% of the release cycle. (If the cycle time goes to four weeks instead of eight, the ratio stays the same.) In other words, 75% of the time the hardware is not utilized.

Cloud Hardware Requirements/Costs

Let’s assume that the Test environment is used 20 hours/day, 5 days/week during the Test phase of the release cycle (20 hours x 10 days or 200 hours). Note that we only pay when we have instances running. We can tear them down and not pay for the overnight/weekend hours. If we build the same infrastructure in AWS, we might see something roughly like this:

Server type    Cost/hour   Qty   Total
Web            $0.10       1     200 x $0.10 = $20.00
Application    $0.20       2     2 x 200 x $0.20 = $80.00
Database       $0.40       2     2 x 200 x $0.40 = $160.00
Cache          $0.20       2     2 x 200 x $0.20 = $80.00
Total                            $340.00

Using AWS, the Test cycle would cost approximately $340 per release. There would be some initial ramp-up costs as the team learned the ins and outs of AWS.

If we focus solely on the hardware costs of cloud utilization vs physical utilization, we get a one-time cost of $27,000 (assuming no service/maintenance contracts, etc.) vs a per-release cost of $340. Said another way, it would take roughly 80 releases tested in the cloud to equal the cost of the initial hardware outlay.
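As a sanity check, the comparison can be reproduced in a few lines of Python. The figures are the hypothetical estimates from the tables above (rates are kept in cents to avoid floating-point noise); summing the AWS line items gives $340 per release and a break-even point of roughly 80 releases:

```python
import math

# Hypothetical one-time physical hardware costs: (cost in dollars, qty)
physical = {
    "Web": (3000, 1),
    "Application": (4000, 2),
    "Database": (4000, 2),
    "Cache": (4000, 2),
}
physical_total = sum(cost * qty for cost, qty in physical.values())

# Hypothetical AWS rates: (rate in cents/hour, qty)
# Test-phase usage: 20 hours/day x 10 days = 200 hours per instance
hours = 20 * 10
cloud = {
    "Web": (10, 1),
    "Application": (20, 2),
    "Database": (40, 2),
    "Cache": (20, 2),
}
per_release = sum(rate * qty * hours for rate, qty in cloud.values()) / 100

# Number of cloud-tested releases that equals the one-time hardware outlay
break_even = math.ceil(physical_total / per_release)

print(physical_total)  # 27000
print(per_release)     # 340.0
print(break_even)      # 80
```

Swap in your own hardware quotes and hourly rates to see how the break-even point moves.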

I realize that my hypothetical scenario leaves some points out – however, I think it illustrates the significant cost savings that can be achieved by moving Test and Dev environments to the cloud. From my current vantage point, this is truly compelling.

Pretty much a no-brainer.

AWS Management Console: Support for CloudFront

Over on the Amazon Web Services Blog they’re announcing support for CloudFront in the AWS Management Console app. Great summary as well:

You can start distributing your content in minutes. You don't need to make a long term commitment and you don't need to download a client application. It is now even easier to access CloudFront in pay-as-you-go fashion.

I’m not trying to become an AWS fanboy, but these guys are on fire lately. Might be time to sell my Akamai (AKAM) shares.

Also, it makes me wonder if articles like this will end up being wrong in the end…

Maui: Kaanapali 10k Run

We spent the last week in Kaanapali and were able to get in some decent runs. The 10k below had some good hills (my Garmin says it was 1377 feet of incline), great scenery and was runnable at 5:30am HST.

Course directions

Starting out on Nohea Kai Drive:

  • Head north on Kaanapali Parkway
  • Turn right on Kekaa Drive, heading up the hill past the golf course
  • At the top of the hill, take an immediate right onto Kualapa Loop (more uphill)
  • Turn left at the 2nd stop sign (Puu Anoano Street). Look down the hill for a great view of the ocean and Lanai.
  • Turn left on Puukoii Road, head down the hill and cross Hwy 30 at the light.
  • Turn left on Kai Ala Drive, run to the end of the court. Follow the “Beachfront This Way” signs (this is approximately Mile 4). This part is a little tricky, so stay on your toes.
  • Follow Kekaa Drive to the bottom of the golf course, turning left on Kaanapali Parkway
  • Just before the end of the street, look for the “Shoreline Access” sign/path. Follow this to the beach.
  • Take the beach path until it ends. Cross the bridge, past the cemetery and continue on to Hanakaoo Beach Park (aka Canoe Beach). Take a quick breather as this is Mile 5.
  • Turn around, run back along the path to Nohea Kai Drive.

Staying cool/hydrated is sometimes a challenge for me in humid environments (the Bay Area has very low humidity). I find my fluid intake is about 50% higher when running here vs. running at home. Also, cooling down takes longer; make sure to allow time to walk at least 5 minutes after completing the run.

10k Run on Catalina Island

Last weekend we ran a 10k on Catalina Island. Having never been there before, I headed over to the USATF website and found a straight-forward 10k.

Starting out in Avalon at the water’s edge, we ran north encountering a decent set of hills in a canyon (this was two weeks after the Newport Marathon, so my quads were quite vocal). At mile 2, we returned to town and headed west, past the (tiny) golf course and up the hill to the Botanical Gardens. Turning around at 4.5m, we again headed back to town and attempted to run out to Pebbly Beach. Unfortunately, due to a rock slide, the road was closed. We turned around, headed back to town and straight to Von’s for Gatorade.

After a quick cool down, we hit Jim’s for breakfast. Great day, I highly recommend it.

AWS Start-up Tour: Cloud Computing with Amazon

Yesterday (6/16/2009) I attended the Amazon “AWS Start-up Tour 2009” in Sunnyvale at the PlugandPlay Tech Center (which reminded me of a larger version of The Enterprise Network (TEN) incubator we were in when we started iPrint).

Update: Slides are here

The event centered around cloud computing and how to utilize the various components of AWS. We’ve been running top3Clicks on AWS (EC2/S3) for almost a year and a half so I was pretty familiar with the basic offerings.

Things were kicked-off by Andy Jassy who runs the AWS business (and authored the initial business plan). He gave a good overview of all that AWS has to offer, including their new offerings around scaling, CDN and monitoring.

The best part of the event was the set of case studies from several companies utilizing AWS for some or all of their production infrastructure:

  • Paco Nathan, Principal Scientist, ShareThis
  • Ljubomir Buturovic, Ph.D., Sr. Director and Chief Scientist, Pathwork
  • Santosh Rau, Engineering Manager, Software Infrastructure, Netflix
  • Andrew Gibbons, Director of Operations, Smugmug

ShareThis and Smugmug are running significant portions of their infrastructure on AWS and both are looking at how to increase their utilization of the platform. Pathwork and ShareThis were interesting in that both had batch scenarios where they needed to provision a large number of machines (more than 1000) for a period of time. Once the batch jobs were complete, they would tear down the instances, thereby reducing both their costs and need for hardware.

At least one company was also using AWS to fire up Dev/Test instances on demand, run scenarios and then tear down the environment. Given that we’ve spent a lot of effort on something quite similar (using Hyper-V for virtualization of test instances) on my current team, I found it extremely interesting how quickly/easily this could be accomplished. There wasn’t any detail, but one scenario might go something like:

  1. Create daily/nightly RC build
  2. Fire up EC2 instance(s) that replicate enough of the production environment to verify the RC
  3. Install/deploy nightly RC to cloud
  4. Run automated functional suite of tests
  5. Generate reports
  6. Tear down environment

No need to have a farm of boxes sitting around that are used for a few hours a day, taking up space in a lab or datacenter. Nice!
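As a rough sketch, the nightly scenario above might be driven by a script like the following. Everything here is hypothetical: the AMI name, build ID and step details are invented, and the EC2 launch/terminate calls are stubbed out, since the real API requires an account and credentials.

```python
# A hypothetical sketch of the nightly build/test pipeline described above.
# The launch/terminate functions are stubs standing in for real EC2 API calls;
# all names and IDs are invented for illustration.

def launch_test_instances(ami_id, count):
    # Stub: a real version would call the EC2 API to start `count` instances
    return [f"i-{ami_id}-{n}" for n in range(count)]

def terminate_instances(instance_ids):
    # Stub: a real version would call the EC2 API to terminate the instances
    return len(instance_ids)

def run_nightly_cycle(build_id):
    log = []
    log.append(f"build {build_id} created")                 # 1. nightly RC build
    instances = launch_test_instances("ami-test", count=3)  # 2. fire up EC2 instances
    log.append(f"launched {len(instances)} instances")
    log.append(f"deployed {build_id}")                      # 3. install/deploy the RC
    log.append("functional suite passed")                   # 4. run automated tests
    log.append("reports generated")                         # 5. generate reports
    terminated = terminate_instances(instances)             # 6. tear down environment
    log.append(f"terminated {terminated} instances")
    return log

for line in run_nightly_cycle("rc-2009-06-17"):
    print(line)
```

The key point is the last step: the environment exists only for the duration of the run, so you pay only for those hours.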

The final presentation was done by Jinesh Varia, Technical Evangelist for AWS. He walked through how to take an existing classic three-tier infrastructure and port it to AWS. He pushed the Presentation layer (html, images, etc.) to S3, the Application layer to EC2 and the Data tier to SQS. It was a slightly simplistic view of the world (he only had 45 minutes), but it left us able to extrapolate more complex cloud-based architectures.

Overall, a great overview of one company’s view of cloud computing. I left wondering again when Amazon (AMZN) will be acquired, not for their retail operation, but as an infrastructure play. Guessing it will be IBM and it will happen in the next year or so, especially given that Sun (SUNW) is going to Oracle (ORCL).