We want you to think big, to dream big dreams, and to envision (and then build) data-intensive applications that can scale from zero users up to tens or hundreds of millions of users before you know it. We want you to succeed, and we don’t want your database to get in the way. Focus on your app and on building a user base, and leave the driving to us.
Six years later, DynamoDB handles trillions of requests per day, and is the NoSQL database of choice for more than 100,000 AWS customers.
Every so often I like to take a look back and summarize some of our most recent launches. I want to make sure that you don’t miss something of importance due to our ever-quickening pace of innovation, and I also like to put the individual releases into a larger context.
For the Enterprise
Many of our recent DynamoDB launches have been driven by the needs of our enterprise customers. For example:
Global Tables – Announced last November, global tables exist in two or more AWS Regions, with fast automated replication across Regions.
Encryption – Announced in February, tables can be encrypted at rest with no overhead.
Point-in-Time Recovery – Announced in March, continuous backups support the ability to restore a table to a prior state with a resolution of one second, going up to 35 days into the past.
DynamoDB Service Level Agreement – Announced in June, the SLA defines availability expectations for DynamoDB tables.
Adaptive Capacity – Though not a new feature, a popular recent blog post explained how DynamoDB automatically adapts to changing access patterns.
Let’s review each of these important features. Even though I have flagged them as being of particular value to enterprises, I am confident that all DynamoDB users will find them valuable.
Global Tables
Even though I try not to play favorites when it comes to services or features, I have to admit that I really like this one. It allows you to create tables that are automatically replicated across two or more AWS Regions, with full support for multi-master writes, all with a couple of clicks. You get an additional level of redundancy (tables are also replicated across three Availability Zones in each region) and fast read/write performance that can scale to meet the needs of the most demanding global apps.
Global tables can be used in nine AWS Regions (we recently added support for three more) and can be set up when you create the table:
To learn more, read Amazon DynamoDB Update – Global Tables and On-Demand Backup.
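If you prefer the API to the console, here is a minimal sketch of the same setup with boto3. The table name and Regions are illustrative, and the actual AWS calls are commented out so the sketch runs without credentials; global tables require an identical table, with streams enabled, to already exist in each listed Region.

```python
# Sketch: building a create_global_table request (boto3-style).
# Assumes a table named "Users" already exists, with DynamoDB Streams
# (NEW_AND_OLD_IMAGES) enabled, in each of the listed Regions.

def global_table_request(table_name, regions):
    """Build the request body for the create_global_table API call."""
    return {
        "GlobalTableName": table_name,
        "ReplicationGroup": [{"RegionName": r} for r in regions],
    }

request = global_table_request("Users", ["us-east-1", "us-west-2", "eu-west-1"])

# The actual call would look like this:
# import boto3
# dynamodb = boto3.client("dynamodb", region_name="us-east-1")
# dynamodb.create_global_table(**request)
```

Once the global table exists, your application reads and writes against the replica in its own Region; DynamoDB handles replication between Regions for you.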
Encryption at Rest
Our customers store sensitive data in DynamoDB and need to protect it in order to achieve their compliance objectives. The encryption at rest feature protects data stored in tables, local secondary indexes, and global secondary indexes using AES-256. The encryption adds no storage overhead, is completely transparent, and does not affect latency. It can be enabled with one click when you create a new table:
To learn more, read New – Encryption at Rest for DynamoDB.
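Programmatically, enabling encryption is a single extra parameter on table creation. Here is a sketch of a create_table request with the SSESpecification parameter set; the table name and key schema are illustrative, and the actual call is commented out so the sketch runs without credentials.

```python
# Sketch: a create_table request with encryption at rest enabled.
# The table name and key schema below are illustrative.

def encrypted_table_request(table_name):
    """Build a create_table request body with encryption at rest enabled."""
    return {
        "TableName": table_name,
        "AttributeDefinitions": [{"AttributeName": "id", "AttributeType": "S"}],
        "KeySchema": [{"AttributeName": "id", "KeyType": "HASH"}],
        "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
        "SSESpecification": {"Enabled": True},  # encrypt at rest (AES-256)
    }

request = encrypted_table_request("SensitiveData")

# The actual call would look like this:
# import boto3
# boto3.client("dynamodb").create_table(**request)
```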
Point-in-Time Recovery
Even when you take every possible operational precaution, you may still do something regrettable to your production database. When (not if) that happens, you can use the DynamoDB point-in-time recovery feature to turn back time, restoring the table to its state at any point up to 35 days in the past. Assuming that you enabled continuous backups for the table, restoration is as simple as choosing the desired point in time:
To learn more, read New – Amazon DynamoDB Continuous Backups and Point-in-Time Recovery (PITR).
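The same flow is available through the API: enable continuous backups with update_continuous_backups, then restore with restore_table_to_point_in_time. The sketch below builds both requests; the table names are illustrative, and the actual calls are commented out so it runs without credentials.

```python
# Sketch: enabling continuous backups (PITR) and restoring a table to a
# prior point in time. Table names are illustrative.
from datetime import datetime, timedelta

enable_request = {
    "TableName": "Orders",
    "PointInTimeRecoverySpecification": {"PointInTimeRecoveryEnabled": True},
}

restore_request = {
    "SourceTableName": "Orders",
    "TargetTableName": "Orders-restored",  # restores always go to a new table
    "RestoreDateTime": datetime.now() - timedelta(days=1),  # any point within 35 days
}

# The actual calls would look like this:
# import boto3
# dynamodb = boto3.client("dynamodb")
# dynamodb.update_continuous_backups(**enable_request)
# dynamodb.restore_table_to_point_in_time(**restore_request)
```

Note that a restore creates a new table rather than overwriting the original, which lets you inspect the restored data before cutting over.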
Service Level Agreement
If you are building your applications on DynamoDB and relying on it to store your mission-critical data, you need to know what kind of availability to expect. The DynamoDB Service Level Agreement (SLA) promises 99.99% availability for tables in a single region and 99.999% availability for global tables, within a monthly billing cycle. The SLA provides service credits if the availability promise is not met.
Adaptive Capacity
DynamoDB does a lot of work behind the scenes to adapt to varying workloads. For example, as your workload scales and evolves, DynamoDB automatically reshards and dynamically redistributes data between multiple storage partitions in response to changes in read throughput, write throughput, and storage.
Also, DynamoDB uses an adaptive capacity mechanism to address situations where the distribution of data across the storage partitions of a table has become uneven. This mechanism allows one partition to consume more than its fair share of the overall provisioned capacity for the table for as long as necessary, as long as the overall use of provisioned capacity remains within bounds. With this change, the advice that we gave in the past regarding key distribution is not nearly as important.
To learn more about this feature and to see how it can help to compensate for surprising or unusual access patterns to your DynamoDB tables, read How Amazon DynamoDB adaptive capacity accommodates uneven data access patterns.
And There You Go
I hope that you have enjoyed this quick look at some of the most recent enterprise-style features for DynamoDB. We’ve got more on the way, so stay tuned for future updates.
PS – Last week we released a DynamoDB local Docker image that you can use in your containerized development environment and for CI testing.
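To give that a quick try, you can run the image and point the AWS CLI (or any SDK) at the local endpoint; the port mapping below is the image's default:

```shell
# Run DynamoDB Local, listening on port 8000
docker run -p 8000:8000 amazon/dynamodb-local

# Point the AWS CLI at the local endpoint instead of the AWS cloud
aws dynamodb list-tables --endpoint-url http://localhost:8000
```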