Which strategy meets these requirements with the LEAST amount of administrative work?
Use AWS Glue to crawl the data in the DynamoDB table. Create a job using an available blueprint to export the data to Amazon S3. Import the data from the S3 file to a DynamoDB table in the new account.
Create an AWS Lambda function to scan the items of the DynamoDB table in the current account and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items to a DynamoDB table in the new account.
Use AWS Data Pipeline in the current account to export the data from the DynamoDB table to a file in Amazon S3. Use Data Pipeline to import the data from the S3 file to a DynamoDB table in the new account.
Configure Amazon DynamoDB Streams for the DynamoDB table in the current account. Create an AWS Lambda function to read from the stream and write to a file in Amazon S3. Create another Lambda function to read the S3 file and restore the items to a DynamoDB table in the new account.
Explanations:
AWS Glue is primarily an ETL service. Using it for this migration means setting up a crawler, a Data Catalog database, and an export job just to move data that needs no transformation, which adds complexity without reducing administrative work. It is not the simplest option for this use case.
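For context, even after the crawler runs, the export still needs a job script along the lines of the minimal sketch below. The catalog database, table, and bucket names are hypothetical and only illustrate the amount of setup this option implies.

```python
import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the DynamoDB table via the Data Catalog entry created by the crawler
# (hypothetical database and table names)
dyf = glueContext.create_dynamic_frame.from_catalog(
    database="migration_db",
    table_name="source_table",
)

# Write the data to S3 as JSON for the import step in the new account
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://example-migration-bucket/export/"},
    format="json",
)

job.commit()
```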
Using Lambda functions to manually scan the table, export the data to S3, and then import it back into DynamoDB is complex and error-prone. It requires custom code to paginate the scan, stay within Lambda's execution time limit, handle throttling, and batch-write the items into the target table, all of which must be written and maintained; see the sketch below.
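As an illustration of the custom code this option implies, a minimal export-side sketch (with hypothetical table, bucket, and key names) might look like the following. A second function would still be needed to read the file back and batch-write the items into the target table, with its own error handling.

```python
import json
import boto3

dynamodb = boto3.client("dynamodb")
s3 = boto3.client("s3")

# Hypothetical names, for illustration only
TABLE_NAME = "source-table"
BUCKET = "example-migration-bucket"
KEY = "export/source-table.json"

def handler(event, context):
    items = []
    paginator = dynamodb.get_paginator("scan")
    # Paginate through the full table; a real export must also deal with
    # the Lambda timeout, read throttling, and very large tables
    for page in paginator.paginate(TableName=TABLE_NAME):
        items.extend(page["Items"])

    s3.put_object(
        Bucket=BUCKET,
        Key=KEY,
        Body=json.dumps(items).encode("utf-8"),
    )
    return {"exported_items": len(items)}
```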
AWS Data Pipeline is purpose-built for moving data between AWS services, including DynamoDB, and provides ready-made templates for exporting a DynamoDB table to Amazon S3 and importing it back from S3. Both legs of the cross-account migration can therefore be configured rather than coded, minimizing administrative overhead.
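As a rough sketch of how little custom work the pipeline approach needs, the boto3 calls below create, define, and activate a pipeline from a previously prepared definition, such as one based on the console's "Export DynamoDB table to S3" template. The file name is hypothetical, and the definition is assumed to already be in the low-level id/name/fields object format that put_pipeline_definition expects.

```python
import json
import boto3

dp = boto3.client("datapipeline")

# Create the pipeline shell; uniqueId guards against duplicate creation
resp = dp.create_pipeline(name="ddb-cross-account-export", uniqueId="ddb-export-001")
pipeline_id = resp["pipelineId"]

# Hypothetical local file holding the pipeline objects (id/name/fields format)
# prepared from the DynamoDB export template; only values such as the table
# name, S3 path, and region need to be filled in
with open("export-dynamodb-to-s3.json") as f:
    pipeline_objects = json.load(f)

dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=pipeline_objects,
)

dp.activate_pipeline(pipelineId=pipeline_id)
```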
Using DynamoDB Streams and Lambda functions to export data to S3 and then load it into a new table is overly complex for this migration, and it would not work on its own: a stream captures only item-level changes made after it is enabled, never the table's existing items. Streams are intended for real-time change replication, not a one-time full-table migration.
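A minimal sketch of the export-side Lambda (with a hypothetical bucket name) makes the limitation concrete: the function only ever sees change records delivered by the stream, so pre-existing items never reach S3.

```python
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-migration-bucket"  # hypothetical bucket name

def handler(event, context):
    # A DynamoDB stream delivers only item-level changes made after the
    # stream is enabled; existing items never appear here, which is why
    # this pattern cannot migrate a full table on its own.
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"]["NewImage"]
            key = "stream-export/{}.json".format(record["eventID"])
            s3.put_object(
                Bucket=BUCKET,
                Key=key,
                Body=json.dumps(new_image).encode("utf-8"),
            )
```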