Earlier this week, I attended an Amazon Web Services (AWS) 101 briefing, delivered by Amazon UK’s Ryan Shuttleworth (@RyanAWS). Although I’ve been watching the “Journey into the AWS cloud” series of webcasts too, the briefing was a really worthwhile session and, when the videos are released to the web, it will be well worth watching as an introduction to the AWS cloud.
One thing I particularly appreciate about Ryan’s presentations is that he approaches things from an architectural view. It’s a refreshing change from the evangelists I’ve met at other companies who generally market software by talking about features (maybe even with some design considerations/best practice or coding snippets) but rarely seem to mention reference architectures or architectural patterns.
During the session, Ryan presented a reference architecture for utility computing and, even though this version relates to AWS services, it’s a pretty good model for re-use (in fact, the beauty of such a reference architecture is that the contents of each box could be swapped out for other components without affecting the overall approach – maybe I should revisit this post and slot in the Windows Azure equivalents!).
So, what’s in each of these boxes?
- AWS global infrastructure: consists of regions (geographic groupings of Amazon’s facilities), each containing multiple physically separated availability zones, plus edge locations (e.g. for content distribution).
- Networking: Amazon provides Direct Connect (a dedicated connection into the AWS cloud), VPN Connections and Virtual Private Clouds (your own logically isolated slice of networking inside EC2) to integrate with existing assets, together with Route 53 (a highly available and scalable global DNS service – there’s a short Route 53 sketch after this list).
- Compute: Amazon’s Elastic Compute Cloud (EC2) allows instances (Linux or Windows) to be created and used as you like, based on a range of instance types with different pricing, scaling up and down as needed (including auto-scaling); Elastic Load Balancing distributes EC2 workloads across instances in multiple availability zones (see the EC2 example below).
- Storage: Simple Storage Service (S3) is the main storage service (Dropbox, Spotify and others run on it) and is designed for write-once, read-many applications. Elastic Block Store (EBS) provides persistent storage behind an EC2 instance (e.g. the boot volume) and supports snapshots; it’s replicated within an availability zone, so there’s no need for RAID. There’s also Glacier for long-term archival of data, AWS Import/Export for bulk uploads/downloads to/from AWS, and the AWS Storage Gateway to connect on-premises and cloud-based storage (see the S3 example below).
- Databases: Amazon’s Relational Database Service (RDS) provides database-as-a-service capabilities (MySQL, Oracle, or Microsoft SQL Server). There’s also DynamoDB – a provisioned-throughput NoSQL database for fast, predictable performance (fully distributed and fault tolerant) – and SimpleDB for smaller NoSQL datasets (see the DynamoDB example below).
- Application services: Simple Queue Service (SQS) provides reliable, scalable message queuing for application decoupling; Simple Workflow Service (SWF) coordinates processing steps across applications and integrates AWS and non-AWS resources to manage distributed state in complex systems; CloudSearch is an elastic search engine based on Amazon’s A9 technology, providing auto-scaling and a sophisticated feature set (comparable to Apache Solr); and CloudFront is a worldwide content delivery network (CDN) for easily distributing content to end users with a single DNS CNAME (see the SQS example below).
- Deployment and admin: Elastic Beanstalk allows one-click deployment from Eclipse, Visual Studio and Git for rapid deployment of applications, with all AWS resources created automatically; CloudFormation is a scripting framework for AWS resource creation that automates stack creation in a repeatable way (see the CloudFormation example below). There’s also Identity and Access Management (IAM), software development kits, Simple Email Service (SES), Simple Notification Service (SNS), ElastiCache, Elastic MapReduce, and the CloudWatch monitoring framework.
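To make some of these boxes more concrete, here are a few rough sketches using the AWS SDK for Python (boto3). They’re illustrative only – every name, ID and region in them is a placeholder I’ve made up, and they assume AWS credentials are already configured. First, networking: adding a DNS record to a Route 53 hosted zone.

```python
import boto3

# Route 53: upsert an A record in an existing hosted zone
# (the zone ID, domain name and IP address are placeholders)
route53 = boto3.client("route53")
route53.change_resource_record_sets(
    HostedZoneId="Z3EXAMPLE",
    ChangeBatch={
        "Comment": "Point www at a web server",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```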
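Compute: a minimal sketch that launches a single EC2 instance in a chosen region (the AMI ID is a placeholder – real AMI IDs are region-specific).

```python
import boto3

# EC2: launch one instance in the EU (Ireland) region
ec2 = boto3.resource("ec2", region_name="eu-west-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)  # the new instance's identifier
```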
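Storage: creating an S3 bucket, then writing and reading back an object (bucket names are globally unique, so this one is made up).

```python
import boto3

# S3: create a bucket, store an object, then read it back
s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="my-example-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_object(Bucket="my-example-bucket", Key="hello.txt", Body=b"Hello, S3")
obj = s3.get_object(Bucket="my-example-bucket", Key="hello.txt")
print(obj["Body"].read().decode())
```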
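Databases: writing and reading an item in DynamoDB (this assumes a table called example-table, with a hash key named id, already exists).

```python
import boto3

# DynamoDB: put an item into an existing table and fetch it back by key
dynamodb = boto3.resource("dynamodb", region_name="eu-west-1")
table = dynamodb.Table("example-table")
table.put_item(Item={"id": "42", "message": "Hello, DynamoDB"})
response = table.get_item(Key={"id": "42"})
print(response["Item"]["message"])
```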
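Application services: decoupling two components with an SQS queue – one side sends a message, the other polls for it and deletes it once handled.

```python
import boto3

# SQS: create a queue, send a message, then consume it
sqs = boto3.resource("sqs", region_name="eu-west-1")
queue = sqs.create_queue(QueueName="example-queue")
queue.send_message(MessageBody="process order 1234")

# Long-poll for up to 10 seconds for new messages
for message in queue.receive_messages(WaitTimeSeconds=10):
    print(message.body)
    message.delete()  # remove the message once it has been handled
```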
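Deployment and admin: creating a CloudFormation stack from a template, so an environment can be rebuilt repeatably (the template here just declares a single S3 bucket).

```python
import boto3

# CloudFormation: a tiny template declaring one resource
template = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  WebBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation", region_name="eu-west-1")
cfn.create_stack(StackName="example-stack", TemplateBody=template)
# Block until the stack has finished creating
cfn.get_waiter("stack_create_complete").wait(StackName="example-stack")
```

The same approach scales up to templates that declare instances, load balancers, queues and so on – which is really the point of the reference architecture: describing the whole stack, not individual resources.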
I suppose if I were to re-draw Ryan’s reference architecture, I’d include support (AWS Support) as well as some payment/billing services (after all, this doesn’t come for free) and the AWS Marketplace for finding and using software applications on the AWS cloud.
One more point: security and compliance (security and service management are not shown as they are effectively layers that run through all of the components in the architecture) – if you implement this model in the cloud, who is responsible? Well, if you contract with Amazon, they are responsible for the AWS global infrastructure and foundation services (compute, storage, database, networking). Everything on top of that (the customisable parts) is up to the customer to secure. Other providers may take a different approach.