Last updated 23/07/2021
Amazon DynamoDB is a key-value and document database that delivers single-digit millisecond performance at any scale. It's a fully managed, multi-region, multi-active, durable database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.
Many of the world's fastest-growing businesses such as Lyft, Airbnb, and Redfin as well as enterprises such as Samsung, Toyota, and Capital One depend on the scale and performance of DynamoDB to support their mission-critical workloads.
Hundreds of thousands of AWS customers have chosen DynamoDB as their key-value and document database for mobile, web, gaming, ad tech, IoT, and other applications that need low-latency data access at any scale. Create a new table for your application and let DynamoDB handle the rest.
In DynamoDB, tables, items, and attributes are the core components that you work with. A table is a collection of items, and each item is a collection of attributes. DynamoDB uses primary keys to uniquely identify each item in a table and secondary indexes to provide more querying flexibility. You can use DynamoDB Streams to capture data modification events in DynamoDB tables.
Tables, Items, and Attributes
The following are the basic DynamoDB components:
- Tables – Like other database systems, DynamoDB stores data in tables. A table is a collection of items (for example, the People and Music tables described below).
- Items – Each table contains zero or more items. An item is a group of attributes that is uniquely identifiable among all of the other items in the table.
- Attributes – Each item is composed of one or more attributes. An attribute is a fundamental data element that does not need to be broken down any further.
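To make the item/attribute relationship concrete, here is a minimal sketch using the AWS SDK for Python (boto3) that writes one item to the People table. PersonID matches the primary key named later in this article; the other attribute names and values are illustrative assumptions.

```python
import boto3

# Minimal sketch: write a single item (a collection of attributes) to the
# People table. Attribute names other than PersonID are illustrative.
dynamodb = boto3.resource("dynamodb")
people = dynamodb.Table("People")

people.put_item(
    Item={
        "PersonID": 101,              # primary key attribute
        "LastName": "Smith",          # scalar attributes
        "FirstName": "Fred",
        "Phone": "555-4321",
        "Address": {                  # nested (map) attribute
            "Street": "123 Main",
            "City": "Anytown",
            "State": "OH",
        },
    }
)
```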
The following diagram shows a table named People with some example items and attributes.
Note the following about the People table:
The following is another example table named Music that you could use to keep track of your music collection.
Note the following about the Music table:
When you create a table, in addition to the table name, you must specify the primary key of the table. The primary key uniquely identifies each item in the table, so that no two items can have the same key.
DynamoDB supports two different kinds of primary keys:
- Partition key – A simple primary key, composed of one attribute known as the partition key.
- Partition key and sort key – Referred to as a composite primary key, this type of key is composed of two attributes: the partition key and the sort key.
DynamoDB uses the partition key's value as input to an internal hash function. The output from the hash function determines the partition (physical storage internal to DynamoDB) in which the item will be stored.
In a table that has only a partition key, no two items can have the same partition key value.
The People table described in Tables, Items, and Attributes is an example of a table with a simple primary key (PersonID). You can access any item in the People table directly by providing the PersonID value for that item.
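As a sketch of what that looks like in code, the following boto3 snippet creates the People table with PersonID as its only key attribute and then reads an item back by key. The numeric key type, on-demand billing mode, and the PersonID value 101 are placeholder assumptions.

```python
import boto3

# Sketch: create the People table with a simple primary key (PersonID)
# and read an item back by that key.
dynamodb = boto3.resource("dynamodb")

people = dynamodb.create_table(
    TableName="People",
    KeySchema=[{"AttributeName": "PersonID", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "PersonID", "AttributeType": "N"}],
    BillingMode="PAY_PER_REQUEST",
)
people.wait_until_exists()

# Direct lookup by primary key; 101 is a placeholder PersonID.
response = people.get_item(Key={"PersonID": 101})
print(response.get("Item"))  # prints None if no item has that key
```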
In a table with a composite primary key, DynamoDB likewise uses the partition key value as input to an internal hash function. The output from the hash function determines the partition (physical storage internal to DynamoDB) in which the item will be stored. All items with the same partition key value are stored together, in sorted order by sort key value.
In a table that has a partition key and a sort key, two items can have the same partition key value. However, those two items must have different sort key values.
The Music table described in Tables, Items, and Attributes is an example of a table with a composite primary key (Artist and SongTitle). You can access any item in the Music table directly if you provide the Artist and SongTitle values for that item.
A composite primary key gives you additional flexibility when querying data. For example, if you provide only the value for Artist, DynamoDB retrieves all of the songs by that artist. To retrieve only a subset of songs by a particular artist, you can provide a value for Artist along with a range of values for SongTitle.
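A rough boto3 sketch of both query patterns follows; the artist name and title prefix are placeholders, and the on-demand billing mode is an assumption.

```python
import boto3
from boto3.dynamodb.conditions import Key

# Sketch: the Music table uses a composite primary key (Artist + SongTitle).
dynamodb = boto3.resource("dynamodb")

music = dynamodb.create_table(
    TableName="Music",
    KeySchema=[
        {"AttributeName": "Artist", "KeyType": "HASH"},
        {"AttributeName": "SongTitle", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "Artist", "AttributeType": "S"},
        {"AttributeName": "SongTitle", "AttributeType": "S"},
    ],
    BillingMode="PAY_PER_REQUEST",
)
music.wait_until_exists()

# All songs by one artist (partition key condition only).
all_songs = music.query(
    KeyConditionExpression=Key("Artist").eq("Acme Band")
)

# A subset of that artist's songs, narrowed by a sort key condition.
some_songs = music.query(
    KeyConditionExpression=Key("Artist").eq("Acme Band")
    & Key("SongTitle").begins_with("Happy")
)

print(len(all_songs["Items"]), len(some_songs["Items"]))
```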
You can create one or more secondary indexes on a table. A secondary index lets you query the data in the table using an alternate key, in addition to queries against the primary key. DynamoDB doesn't require that you use indexes, but they give your applications more flexibility when querying your data. After you create a secondary index on a table, you can read data from the index in much the same way as you do from the table.
DynamoDB supports two kinds of indexes:
- Global secondary index – An index with a partition key and sort key that can be different from those on the table.
- Local secondary index – An index that has the same partition key as the table, but a different sort key.
Each DynamoDB table has a default quota of 20 global secondary indexes and 5 local secondary indexes.
In the example Music table shown previously, you can query data items by Artist (partition key) or by Artist and SongTitle (partition key and sort key). What if you also wanted to query the data by Genre and AlbumTitle? To do this, you could create an index on Genre and AlbumTitle, and then query the index in much the same way as you'd query the Music table.
The following diagram shows the example Music table, with a new index called GenreAlbumTitle. In the index, Genre is the partition key and AlbumTitle is the sort key.
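If the Music table already exists, one way to add such an index is the UpdateTable operation. The following boto3 sketch assumes an on-demand (pay-per-request) table, so no provisioned throughput is specified for the index.

```python
import boto3

client = boto3.client("dynamodb")

# Sketch: add a global secondary index named GenreAlbumTitle to the Music
# table, with Genre as its partition key and AlbumTitle as its sort key.
client.update_table(
    TableName="Music",
    AttributeDefinitions=[
        {"AttributeName": "Genre", "AttributeType": "S"},
        {"AttributeName": "AlbumTitle", "AttributeType": "S"},
    ],
    GlobalSecondaryIndexUpdates=[
        {
            "Create": {
                "IndexName": "GenreAlbumTitle",
                "KeySchema": [
                    {"AttributeName": "Genre", "KeyType": "HASH"},
                    {"AttributeName": "AlbumTitle", "KeyType": "RANGE"},
                ],
                # Copy all attributes into the index so queries against it
                # can return full items.
                "Projection": {"ProjectionType": "ALL"},
                # For a provisioned-mode table, a ProvisionedThroughput
                # entry would also be required here.
            }
        }
    ],
)
```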
Note the following about the GenreAlbumTitle index:
You can query the GenreAlbumTitle index to find all albums of a particular genre (for example, all Rock albums). You can also query the index to find all albums within a particular genre that have certain album titles (for example, all Country albums with titles that start with the letter H).
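Both index queries might look like the following boto3 sketch; the genre values and the title prefix are taken from the example above.

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
music = dynamodb.Table("Music")

# All Rock albums, read from the GenreAlbumTitle index rather than the base table.
rock_albums = music.query(
    IndexName="GenreAlbumTitle",
    KeyConditionExpression=Key("Genre").eq("Rock"),
)

# Country albums whose titles start with the letter H.
country_h_albums = music.query(
    IndexName="GenreAlbumTitle",
    KeyConditionExpression=Key("Genre").eq("Country")
    & Key("AlbumTitle").begins_with("H"),
)

print(rock_albums["Items"], country_h_albums["Items"])
```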
DynamoDB Streams is an optional feature that captures data modification events in DynamoDB tables. The data about these events appears in the stream in near-real time, and in the order in which the events occurred.
Each event is represented by a stream record. If you enable a stream on a table, DynamoDB Streams writes a stream record whenever one of the following events occurs:
- A new item is added to the table: the stream captures an image of the entire item, including all of its attributes.
- An item is updated: the stream captures the "before" and "after" images of any attributes that were modified in the item.
- An item is deleted from the table: the stream captures an image of the entire item before it was deleted.
Each stream record also contains the name of the table, the event timestamp, and other metadata. Stream records have a lifetime of 24 hours; after that, they are automatically removed from the stream.
You can use DynamoDB Streams together with AWS Lambda to create a trigger: code that runs automatically whenever an event of interest appears in a stream. For example, consider a Customers table that contains customer information for a company. Suppose that you want to send a "welcome" email to each new customer. You could enable a stream on that table, and then associate the stream with a Lambda function. The Lambda function would run whenever a new stream record appears, but would only process new items added to the Customers table. For any item that has an EmailAddress attribute, the Lambda function would invoke Amazon Simple Email Service (Amazon SES) to send an email to that address.
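A rough sketch of such a Lambda function is shown below. The attribute names follow the example in the text; the SES sender address and message content are placeholders, and the sketch assumes the stream is configured with a view type that includes new item images (NEW_IMAGE or NEW_AND_OLD_IMAGES).

```python
import boto3

ses = boto3.client("ses")

def lambda_handler(event, context):
    """Sketch: triggered by a DynamoDB stream on the Customers table."""
    for record in event.get("Records", []):
        # Only react to newly added items.
        if record.get("eventName") != "INSERT":
            continue

        new_image = record["dynamodb"].get("NewImage", {})
        email_attr = new_image.get("EmailAddress")
        if not email_attr:
            continue  # customers without an EmailAddress get no email

        ses.send_email(
            Source="welcome@example.com",  # placeholder verified sender
            Destination={"ToAddresses": [email_attr["S"]]},
            Message={
                "Subject": {"Data": "Welcome!"},
                "Body": {"Text": {"Data": "Thanks for joining us."}},
            },
        )
```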
Note
In this example, the last customer, Craig Roe, will not receive an email because he doesn't have an EmailAddress.
In addition to triggers, DynamoDB Streams enables powerful solutions such as data replication within and across AWS Regions, materialized views of data in DynamoDB tables, data analysis using Kinesis materialized views, and much more.
DynamoDB supports some of the world’s largest scale applications by providing consistent, single-digit millisecond response times at any scale. You can build applications with virtually unlimited throughput and storage. DynamoDB global tables replicate your data across multiple AWS Regions to give you fast, local access to data for your globally distributed applications. For use cases that require even faster access with microsecond latency, DynamoDB Accelerator (DAX) provides a fully managed in-memory cache.
DynamoDB is serverless, with no servers to provision, patch, or manage and no software to install, maintain, or operate. DynamoDB automatically scales tables up and down to adjust for capacity and maintain performance. Availability and fault tolerance are built in, eliminating the need to architect your applications for these capabilities. DynamoDB provides both provisioned and on-demand capacity modes so that you can optimize costs by specifying capacity per workload, or by paying for only the resources you consume.
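As an illustration, the capacity mode is chosen per table at creation time (and can be changed later). The following boto3 sketch uses placeholder table names, key attributes, and capacity values.

```python
import boto3

client = boto3.client("dynamodb")

# On-demand mode: pay per request, no capacity to specify up front.
client.create_table(
    TableName="EventsOnDemand",  # placeholder name
    KeySchema=[{"AttributeName": "EventID", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "EventID", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)

# Provisioned mode: specify read/write capacity units for the workload.
client.create_table(
    TableName="EventsProvisioned",  # placeholder name
    KeySchema=[{"AttributeName": "EventID", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "EventID", "AttributeType": "S"}],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
)
```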
DynamoDB supports ACID transactions to enable you to build business-critical applications at scale. DynamoDB encrypts all data by default and provides fine-grained identity and access control on all your tables. You can create full backups of hundreds of terabytes of data instantly with no performance impact to your tables, and recover to any point in time in the preceding 35 days with no downtime. You also can export your DynamoDB table data to your data lake in Amazon S3 to perform analytics at any scale. DynamoDB is also backed by a service level agreement for guaranteed availability.
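For example, a transactional write that touches two tables atomically might look like the following sketch (boto3 client API; the table names, keys, and condition logic are hypothetical placeholders).

```python
import boto3

client = boto3.client("dynamodb")

# Sketch: atomically create an order and decrement product stock.
# Both writes succeed together or neither is applied.
client.transact_write_items(
    TransactItems=[
        {
            "Put": {
                "TableName": "Orders",        # placeholder table
                "Item": {
                    "OrderID": {"S": "order-1001"},
                    "ProductID": {"S": "prod-42"},
                },
                "ConditionExpression": "attribute_not_exists(OrderID)",
            }
        },
        {
            "Update": {
                "TableName": "Products",      # placeholder table
                "Key": {"ProductID": {"S": "prod-42"}},
                "UpdateExpression": "SET Stock = Stock - :one",
                "ConditionExpression": "Stock >= :one",
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
    ]
)
```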
Build powerful web applications that automatically scale up and down. You don't need to maintain servers, and your applications have automated high availability.
Use DynamoDB and AWS AppSync to build interactive mobile and web apps with real-time updates, offline data access, and data sync with built-in conflict resolution.
Build flexible and reusable microservices using DynamoDB as a serverless data store for consistent and fast performance.
Companies in the advertising technology (ad tech) vertical use DynamoDB as a key-value store for storing various kinds of marketing data, such as user profiles, user events, clicks, and visited links. Applicable use cases include real-time bidding (RTB), ad targeting, and attribution. These use cases require a high request rate (millions of requests per second), low predictable latency, and reliability. Companies use caching through DynamoDB Accelerator (DAX) when they have high read volumes or need submillisecond read latency. Increasingly, ad tech companies need to deploy their RTB and ad targeting platforms in more than one geographical AWS Region, which requires data replication between Regions.
Companies in the gaming vertical use DynamoDB across all parts of their game platforms, including game state, player data, session history, and leaderboards. The main benefits these companies get from DynamoDB are its ability to scale reliably to millions of concurrent users and requests while maintaining consistently low latency, measured in single-digit milliseconds. As a fully managed service, DynamoDB also has no operational overhead, so game developers can focus on developing their games instead of managing databases. And as game developers increasingly look to expand from a single AWS Region to multiple AWS Regions, they can rely on DynamoDB global tables for multi-Region, active-active replication of data.
The Pokémon Company migrated global configuration and time-to-live (TTL) data to Amazon DynamoDB, resulting in a 90 percent reduction in bot login attempts.
Many companies in the retail space use common DynamoDB design patterns to deliver consistently low latency for mission-critical use cases. Being free from scaling concerns and operational burden is a key competitive advantage and an enabler for high-velocity, extreme-scale events such as Amazon Prime Day, whose traffic is difficult to forecast. Scaling up and down lets these customers pay only for the capacity they need and keeps precious technical resources focused on innovation rather than operations.
As companies in banking and finance build more cloud-native applications, they seek to use fully managed services to increase agility, reduce time to market, and minimize operational overhead. At the same time, they have to ensure the security, reliability, and high availability of their applications. As these companies expand their existing services that are backed by legacy mainframe systems, they find that legacy systems are unable to meet the scalability demands of their growing user base, new platforms such as mobile applications, and the resulting increases in traffic. To solve this problem, they replicate data from their mainframes to the cloud to offload the traffic.