There are numerous options for data storage available on Windows Azure, and it can be very difficult to pick the right one for a given application profile.
This session will evaluate many of the storage options available to the Azure developer in terms of their:
– Ease of use
– Real-world performance
– Cost
– Features
The session will also explore the benefits of tiered storage and review patterns that the developer can use to get the most out of a few key storage options.
Speaker Richard Laxton
Disclaimer: These are conference session notes I compiled during various sessions at Microsoft Tech Ed 2012, September 11-14, 2012. The majority of the content comprises notes taken from the presentation slides, accompanied occasionally by my own narration. Some of the content may be freehand. Enjoy… Rob
This session is about deciding on a storage approach for Azure applications. On the agenda: the kinds of storage available, a look at specific technologies, performance (in brief), and processing data.
How do we look at storage?
What does the data look like – relational (SQL)? structured? unstructured (file system)? What is the lifecycle – permanent/transient?
API? Open or proprietary interface, language API? What kind of access mechanism do you require? Random or sequential access?
How will it scale?
Level? Horizontal or vertical scale? Ease of implementation (and testing?) What kind of performance needs to be met?
Size based (capacity) or transactions (bandwidth)?
Additional considerations: schema management, transactional support, management and monitoring, access controls, auditing.
Record type/size/total size/recovery?
Azure Storage Options
1. SQL Azure
Ticks a lot of the boxes: structured, API access, random access, with good scalability, though horizontal and vertical scaling are manual. Cost is per database.
Throttling can be inconsistent, backups don't work the same way as SQL Server, thresholds under load can be unpredictable, and the feature set is not identical to SQL Server. You need to actively police retry attempts and manage outages.
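Since throttling surfaces as transient failures that the application must absorb, retries are typically wrapped in exponential backoff. A minimal sketch of that policy (the exception type, delays, and helper names are illustrative assumptions, not SQL Azure API):

```python
import random
import time

class TransientError(Exception):
    """Stand-in for a throttling/connection error raised under load."""

def execute_with_retry(operation, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TransientError:
            if attempt == max_attempts:
                raise  # give up: surface the outage to the caller
            # back off exponentially, with jitter to avoid synchronised retries
            sleep(base_delay * (2 ** (attempt - 1)) * (1 + random.random()))

# Example: an operation that is throttled twice before succeeding.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("throttled")
    return "rows"

result = execute_with_retry(flaky_query, sleep=lambda s: None)
```

Capping the attempt count is the "police retry attempts" part: unbounded retries against a throttled database only add load.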
2. Table storage (NoSQL)
Structured, permanent, web service API (plus managed APIs for many languages), random access, programmatic vertical and horizontal scale (easy to arrange data). You need to understand how to design for it. No relationships (key and column based). Data is flat, like an indexable CSV file. Cost is based on size and transactions.
Identifying sets are table name, partition key and row key (typed columns, very flat). No secondary indexes.
Records are limited to around 1 MB, with single columns at 64 KB each. Limited data types (mainly primitives). Data modelling can be difficult compared to relational modelling: how is a domain model mapped into rows and columns, with no relationships?
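The mapping question above usually comes down to choosing a partition key and row key and flattening nested data into columns. A sketch of one such mapping for a hypothetical customer/order model (the entity shape mirrors table storage's PartitionKey/RowKey convention; the domain names are invented for illustration):

```python
def to_entity(customer_id, order_id, order):
    """Flatten a domain object into a table-storage-style entity.

    PartitionKey groups all of one customer's orders on a single partition
    (cheap range queries); RowKey makes each order unique within it.
    """
    entity = {
        "PartitionKey": customer_id,
        "RowKey": f"order-{order_id:08d}",  # zero-pad so lexical order == numeric order
    }
    # No relationships: nested order lines must be denormalised into flat columns.
    for i, line in enumerate(order["lines"]):
        entity[f"Line{i}_Sku"] = line["sku"]
        entity[f"Line{i}_Qty"] = line["qty"]
    entity["Total"] = order["total"]
    return entity

entity = to_entity("cust-42", 7, {
    "lines": [{"sku": "A1", "qty": 2}, {"sku": "B9", "qty": 1}],
    "total": 31.50,
})
```

Because there are no secondary indexes, the key choice effectively *is* the query design: anything you need to look up efficiently has to be encoded in the partition key or row key.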
3. Blob Storage
Unstructured, permanent, random access, web service API, a bunch of data with metadata, automatic scaling (H & V) and cost is based on size and transactions. Could store, for example, a serialized object into BLOB storage.
Two types: Block Blobs & Page blobs.
Block: up to 200 GB, sequential write, not easy to update.
Page: up to 1 TB, individually addressable 512-byte pages.
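The block-blob model (sequential write, awkward updates) follows from how uploads work: the payload is split into blocks, each uploaded independently, and then a block list is committed in order. A sketch of the client-side chunking step (the function and id scheme are illustrative, not the Azure SDK):

```python
def split_into_blocks(data: bytes, block_size: int = 4 * 1024 * 1024):
    """Split a payload into (block_id, chunk) pairs, as a block-blob upload
    would before committing the ordered block list."""
    blocks = []
    for i in range(0, len(data), block_size):
        block_id = f"{i // block_size:06d}"  # ids must be uniform length
        blocks.append((block_id, data[i:i + block_size]))
    return blocks

# Tiny demo payload with a tiny block size: 10 bytes -> blocks of 4, 4 and 2.
blocks = split_into_blocks(b"x" * 10, block_size=4)
```

Updating one byte mid-blob means re-uploading that block and re-committing the list, which is why page blobs (addressable pages) suit random writes better.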
4. Queue Storage
For distributing workload: unstructured, permanent, sequential ordered access (FIFO), web service based, managed API, multiple readers/writers, cost based on size and transactions. Great failure-recovery support: dequeue and delete are separate operations, so a failure to delete will see the message return.
No notification mechanism, but supports polling. Be wary of overused polling (cost is transaction based). Performance is not brilliant (especially enqueuing), and messages are small (~64 KB). Can dequeue 32 messages at a time. FIFO behaviour is not guaranteed. A larger message needs a pointer to a blob.
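The dequeue/delete split above is the heart of the failure-recovery story: a consumed message stays hidden until it is deleted, and reappears if the worker dies first. A minimal in-memory simulation of that contract (`SimQueue` and its receipt scheme are a stand-in for illustration, not the Azure queue API):

```python
import collections

class SimQueue:
    """In-memory stand-in for an Azure queue: dequeue hides a message, and it
    reappears unless explicitly deleted before its visibility timeout."""
    def __init__(self):
        self._visible = collections.deque()
        self._invisible = {}
        self._next_receipt = 0

    def enqueue(self, msg):
        self._visible.append(msg)

    def dequeue(self):
        if not self._visible:
            return None
        msg = self._visible.popleft()
        self._next_receipt += 1
        self._invisible[self._next_receipt] = msg  # hidden, not gone
        return self._next_receipt, msg

    def delete(self, receipt):
        self._invisible.pop(receipt, None)  # message processed: remove for good

    def expire_timeouts(self):
        # Simulate visibility timeouts lapsing: undeleted messages return.
        for msg in self._invisible.values():
            self._visible.append(msg)
        self._invisible.clear()

q = SimQueue()
q.enqueue("job-1")
receipt, msg = q.dequeue()
# Worker crashes before deleting -> the timeout returns the message.
q.expire_timeouts()
receipt2, msg2 = q.dequeue()
q.delete(receipt2)  # processed successfully this time
```

This redelivery is also why FIFO order is not guaranteed, and why handlers should be idempotent: a message can be processed more than once.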
5. Azure Cache
Transient, unstructured (key/value store), web service API, .NET SDK, auto or manual scale, shared and dedicated options available, cost based on size (128 MB-4 GB). Roughly the equivalent of memcached.
Stored in memory, with a local in-memory copy if required; a distributed notification model invalidates local copies. Automatically purges if the quota is reached; there are limitations on bandwidth and connections.
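The "automatically purges at quota" behaviour is essentially bounded eviction. A sketch of the idea with a least-recently-used policy (the class and the item-count quota are illustrative assumptions; the real cache enforces a memory quota and its eviction policy is not specified here):

```python
from collections import OrderedDict

class QuotaCache:
    """Key/value cache that evicts least-recently-used entries once a
    quota is reached, roughly as a distributed cache purges at its quota."""
    def __init__(self, max_items):
        self.max_items = max_items
        self._data = OrderedDict()  # insertion/access order = recency order

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self.max_items:
            self._data.popitem(last=False)  # evict the least recently used

    def get(self, key):
        if key not in self._data:
            return None  # miss: caller must fall back to durable storage
        self._data.move_to_end(key)  # mark as recently used
        return self._data[key]

c = QuotaCache(max_items=2)
c.put("a", 1)
c.put("b", 2)
c.get("a")     # touch "a", so "b" becomes the eviction candidate
c.put("c", 3)  # quota exceeded: "b" is purged
```

The practical consequence for callers is the same as with the real service: a cache hit is never guaranteed, so every read needs a fallback path to durable storage.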
6. Content Delivery Network (CDN)
A geo-distributed cache, as offered by providers such as Akamai; not strictly part of Azure. The only way to deliver to a targeted geography. Mirrors HTTP(S) content; availability is controlled by HTTP headers; generally cheaper than delivering content through Azure. The Microsoft CDN is perhaps easier to use. Can connect blob storage to the CDN. Transient storage.
7. Apache Hadoop
Java-based distributed processing of large data sets. Highly scalable, with an option to deploy a Hadoop cluster from Azure. Reliable computation, suitable for structured and unstructured data. Provides both storage and processing capability.
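Hadoop's processing model is MapReduce: independent map tasks emit key/value pairs, which are grouped by key and summed by reducers. The canonical word-count example, sketched in plain Python rather than Hadoop's Java API, just to show the shape of the computation:

```python
from collections import defaultdict

def map_phase(doc):
    """Map: emit (word, 1) for every word in one input split."""
    return [(word.lower(), 1) for word in doc.split()]

def reduce_phase(pairs):
    """Reduce: sum counts per key (the shuffle step groups pairs by word)."""
    totals = defaultdict(int)
    for word, count in pairs:
        totals[word] += count
    return dict(totals)

docs = ["big data big clusters", "big storage"]
pairs = [p for d in docs for p in map_phase(d)]  # maps run per split, in parallel
counts = reduce_phase(pairs)
```

On a real cluster each document (split) is mapped on a different node, which is what makes the approach scale to data sets far larger than one machine's memory.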
8. Virtual Machine
Install anything you need (MySQL, memcached, Oracle?). Why would you resort to a VM? Legacy systems, dependencies, or use of MySQL (and others). Possibly not the right approach for new greenfield apps.
Performance
Emulator environments are not a reliable measure of performance; test on real Azure.
Platform is dynamic, perform additional testing. Make sure high volume situations are tested. Test beyond read/write scenarios. Test common scenarios and keep an eye on edge case scenarios.
A sample test plan: build a simple application, control the API, use multiple workers, and test at different levels of load. Stress testing the platform is OK.
Test small/large objects
Results: for mid-sized batches, table storage seems to be the winner. It depends on your own specific application, though, so profile multiple storage options. SQL needs to be designed with sharding or caching to keep load manageable.
Patterns for Performance
Tiered storage, output caching, queued updates. Use the right storage for the right data.
Tiered storage diagram (from slide): Local Cache (transient) layered over SQL Azure (structured).
Architecturally challenging: performance, instrumentation, development support, etc. Table storage allows for denormalisation (multiple copies of the data), and queued updates can help keep those copies in sync. Output caching can be of benefit: cache generated JSON etc. at the presentation tier, locally (in IIS), but beware stale data. CDN edge caching reduces load on servers and gives geographically targeted content.
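The tiered-storage pattern combines these pieces into a read-through lookup: try the fastest, most transient tier first and fall back toward the structured store, promoting hits back up on the way out. A sketch using plain dicts as stand-ins for the cache, table storage and SQL tiers (all names are illustrative):

```python
def read_through(key, cache, table, sql):
    """Tiered lookup: transient cache first, then table storage, then SQL,
    promoting a hit back into the faster tiers on the way out."""
    if key in cache:
        return cache[key], "cache"
    if key in table:
        cache[key] = table[key]   # promote into the transient tier
        return table[key], "table"
    value = sql[key]              # authoritative, structured store
    table[key] = value            # denormalised copy for cheap future reads
    cache[key] = value
    return value, "sql"

cache, table, sql = {}, {}, {"p1": "widget"}
v1, tier1 = read_through("p1", cache, table, sql)  # cold: falls through to SQL
v2, tier2 = read_through("p1", cache, table, sql)  # warm: served from the cache
```

The design choice is the trade-off named above: reads get cheaper and faster, at the price of multiple copies that can go stale, which is where queued updates or cache invalidation come in.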
How to choose?
Understand your design needs, resource and environment. Ensure design is proportional to scale needs. Determine data complexity needs/requirements.
Line of business: transactional, small user base (<100), developers generally SQL-experienced, hybrid applications (online/offline), and very complex data. Use SQL Azure, with Azure Cache for acceleration (keep it simple: keeping transactions and size down = lower cost).
Internet-scale application: no transactions, read-optimised, partitionable data, often a simple data model. Use Azure Table Storage, Apache Hadoop for analysis, and Azure Cache for acceleration.
No single answer. Consider all options, can use more than one strategy.