DynamoDB is a popular choice for many applications but limits item size to a hard 400 KB. Because read and write capacity units are billed by the amount of data transferred, large items also inflate cost and throughput consumption. Here’s a 1-minute rundown of strategies to tackle this:

  1. Compress Large Objects: Shrink your objects with an algorithm like Gzip before storing them in DynamoDB. The trade-off is added application complexity, and DynamoDB can no longer query or filter on the compressed attribute’s contents (see the first sketch after this list).

  2. Vertical Sharding: Split a large item into multiple parts, or shards, stored as separate items. This works well when different attributes of the item have different access patterns: group related attributes together and give each group a key that combines the item’s primary key with a shard identifier. The cost is extra complexity, especially when reassembling shards and keeping them consistent (see the second sketch after this list).

  3. Using Amazon S3 for Storage: For very large or infrequently accessed objects, store the payload in Amazon S3 and keep only a reference to it in DynamoDB. This offloads the storage burden to S3, which is cheap and scalable, at the cost of extra latency and complexity from coordinating two services (see the third sketch after this list).
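
For the compression approach, here’s a minimal Python sketch using boto3 and the standard-library gzip module. The table name `documents` and the attributes `doc_id` and `body` are hypothetical stand-ins, not a prescribed schema:

```python
import gzip

import boto3

# Hypothetical table with partition key "doc_id".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("documents")

def put_compressed(doc_id: str, text: str) -> None:
    # Gzip the payload before writing; boto3 stores Python bytes
    # as DynamoDB's native binary (B) type.
    compressed = gzip.compress(text.encode("utf-8"))
    table.put_item(Item={"doc_id": doc_id, "body": compressed})

def get_decompressed(doc_id: str) -> str:
    item = table.get_item(Key={"doc_id": doc_id})["Item"]
    # boto3 returns binary attributes wrapped in a Binary object;
    # .value exposes the raw bytes.
    return gzip.decompress(item["body"].value).decode("utf-8")
```

Note that the compressed value still counts toward the 400 KB item limit, so this only helps when the compressed size fits.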
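For vertical sharding, the sketch below assumes a table `profiles` with a composite key (`pk` partition key, `sk` sort key); the shard names `CORE`, `PREFS`, and `HISTORY` are illustrative attribute groupings, not part of any standard:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table with partition key "pk" and sort key "sk".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("profiles")

def put_sharded(user_id: str, core: dict, prefs: dict, history: dict) -> None:
    # One item ("shard") per attribute group, all under the same partition key.
    # Note: batch_writer is not atomic; if shards must stay mutually
    # consistent, a transactional write would be needed instead.
    with table.batch_writer() as batch:
        batch.put_item(Item={"pk": user_id, "sk": "CORE", **core})
        batch.put_item(Item={"pk": user_id, "sk": "PREFS", **prefs})
        batch.put_item(Item={"pk": user_id, "sk": "HISTORY", **history})

def get_shard(user_id: str, shard: str) -> dict:
    # Hot paths read only the shard they need, paying for that item alone.
    return table.get_item(Key={"pk": user_id, "sk": shard}).get("Item", {})

def get_full_object(user_id: str) -> dict:
    # Reassembly: one Query pulls every shard under the partition key.
    items = table.query(KeyConditionExpression=Key("pk").eq(user_id))["Items"]
    merged: dict = {}
    for item in items:
        merged.update({k: v for k, v in item.items() if k not in ("pk", "sk")})
    return merged
```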
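And for the S3 reference pattern, a sketch assuming a hypothetical bucket `media-assets` and a table `assets` keyed by `asset_id`:

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("assets")  # hypothetical table
BUCKET = "media-assets"           # hypothetical bucket

def put_large_object(asset_id: str, payload: bytes, content_type: str) -> None:
    key = f"assets/{asset_id}"
    # Store the heavy payload in S3...
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload, ContentType=content_type)
    # ...and keep only a lightweight pointer plus metadata in DynamoDB.
    table.put_item(Item={
        "asset_id": asset_id,
        "s3_bucket": BUCKET,
        "s3_key": key,
        "size_bytes": len(payload),
    })

def get_large_object(asset_id: str) -> bytes:
    # Two round trips: the pointer lookup in DynamoDB, then the S3 fetch.
    ref = table.get_item(Key={"asset_id": asset_id})["Item"]
    obj = s3.get_object(Bucket=ref["s3_bucket"], Key=ref["s3_key"])
    return obj["Body"].read()
```

The extra round trip is the latency cost mentioned above; the pointer item, however, stays tiny regardless of how large the S3 object grows.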

Real-life examples include storing compressed text documents directly in DynamoDB, offloading high-resolution images and video files to S3, and vertically sharding complex JSON objects whose attribute groups are accessed at different rates.