Yesterday I attended the first of a set of roving sessions from Amazon.com to explain their cloud offering, Amazon Web Services (AWS). I’ve been tinkering with their stuff for a while now, but I was amped to hear a bit more from the horse’s mouth. I went with a couple of colleagues and greatly enjoyed the horrific drive to/from Beverly Hills.
The half-day session was keynoted by Amazon.com CTO Dr. Werner Vogels. He explained how Amazon’s cloud offering came to be, and how it helped them do their jobs better. He made a number of good points during his talk:
- We have to think of reality scale vs. an academic or theoretical model of how an application scales. That is, design scalability for real life and your real components and don’t get caught up in academic debates on how a system SHOULD scale.
- He boiled a key aspect of SOA down to “the description of a service is enough to consume it.” That’s a neat way to think about services. No libraries required, just standard protocols and a definition file.
- If your IT team has to become experts in base functions like high performance computing in order to solve a business problem, then you’re not doing IT right and you’re wasting time. We need to leverage the core competencies (and offerings) of others.
- Amazon.com noticed that when it takes a long time (even 6 hours) to provision new servers, people are more hesitant to release the resource when they’re done with it. This leads to all sorts of waste and inefficiency, and the behavior can be eliminated by an on-demand, pay-as-you-go cloud model.
- Amazon.com breaks down the application into core features and services to the point that each page leverages up to 300 distinct services. I can’t comprehend that without the aid of alcohol.
- We need to talk to our major software vendors about cloud-driven licensing. Not the vendor’s cloud computing solution, but how I can license their software in a cloud environment where I may temporarily stand up servers. Should I pay a full CPU license for a database if I’m only using it during cloud-bursting scenarios or for short-lived production apps, or should I have a rental license available to me?
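To make the “description of a service is enough to consume it” idea concrete, here’s a toy Python sketch. The service, endpoint, and operation names are all invented for illustration (this is not any actual Amazon API): given nothing but a machine-readable description, a generic client can build valid requests with the standard library alone, no vendor SDK required.

```python
from urllib.parse import urlencode, urljoin

# Hypothetical service description -- the kind of information a WSDL or
# similar definition file conveys: an endpoint plus named operations.
DESCRIPTION = {
    "endpoint": "https://api.example.com/",
    "operations": {
        "GetItem": {"path": "items", "params": ["item_id"]},
        "ListItems": {"path": "items", "params": ["page"]},
    },
}

def build_request(description, operation, **kwargs):
    """Turn an operation name plus arguments into a concrete request URL."""
    op = description["operations"][operation]
    missing = [p for p in op["params"] if p not in kwargs]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    url = urljoin(description["endpoint"], op["path"])
    return f"{url}?{urlencode(kwargs)}"

print(build_request(DESCRIPTION, "GetItem", item_id=42))
# https://api.example.com/items?item_id=42
```

The client knows nothing about this particular service beyond its description, which is the whole point: standard protocols plus a definition file are enough.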
Werner mentioned a number of good case studies ranging from startups to established, mature organizations. Everything from eHarmony.com using the parallel processing of Amazon Elastic MapReduce to do “profile matching” in the cloud, to a company like SAP putting source code in the cloud in the evenings and having regression tests run against it. I was amused by the eHarmony.com example only because I would hate to be the registered member who finally made the company say “look, we simply need 1400 simultaneously running computers to find this gargoyle a decent female companion.”
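For a feel of what Elastic MapReduce parallelizes, here’s a local, single-process sketch of the map/reduce pattern applied to a made-up “profile matching” problem. The profiles and the scoring rule (count of shared interests) are invented for illustration, not anything eHarmony.com actually described.

```python
from collections import defaultdict
from itertools import combinations

# Toy profiles: each member maps to a set of interests.
profiles = {
    "alice": {"hiking", "jazz", "cooking"},
    "bob": {"hiking", "cooking", "chess"},
    "carol": {"jazz", "film"},
}

def map_phase(profiles):
    """Emit a (pair, interest) record for every interest two profiles share."""
    for a, b in combinations(sorted(profiles), 2):
        for interest in profiles[a] & profiles[b]:
            yield (a, b), interest

def reduce_phase(records):
    """Count shared interests per pair -- a crude compatibility score."""
    scores = defaultdict(int)
    for pair, _ in records:
        scores[pair] += 1
    return dict(scores)

scores = reduce_phase(map_phase(profiles))
print(scores[("alice", "bob")])  # 2 (hiking and cooking)
```

In a real cluster, the map phase fans out across machines and the framework groups records by key before the reduce phase; the 1400-computer version is the same shape, just sharded.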
Representatives from Skifta, eHarmony.com, Reddit, and Geodelic were all on hand to explain how they used the Amazon cloud and what they learned while leveraging it. Good lessons about working around unavoidable latency, distributing data, and sizing on demand.
The session closed with talks from Mike Culver and Steve Riley of AWS. Mike talked about architectural considerations (e.g. design for failure, force loose coupling, design for elasticity, put security everywhere, consider the best storage option) while Steve (a former Microsoftie) talked about security considerations. My boss astutely noticed that most (all?) of Mike’s points pertain to ANY good architecture, not just cloud. Steve talked a fair amount about Amazon Virtual Private Cloud, which is a pretty slick way to use the Amazon cloud while keeping those machines within the boundaries (IP, domain, management) of your own network.
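Mike’s “design for failure” point is easy to sketch in code. Here’s a minimal example, assuming nothing beyond the Python standard library: rather than trusting any single request to a cloud dependency to succeed, the caller retries with exponential backoff. The flaky dependency below is simulated for illustration.

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01):
    """Call fn(), retrying on exception with exponentially growing delays."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(base_delay * (2 ** attempt))

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky)
print(result)  # ok
```

The same habit (assume components fail, keep them loosely coupled so a retry or a replacement instance is cheap) is exactly why my boss’s observation rings true: it’s just good architecture, cloud or not.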
All in all, a great use of time. We thought of additional use cases for my own company, including proof-of-concept environments, temporary web servers for new product launches, and processing our mounds of drug research data more efficiently.