Is there a way to have the DynamoDB rows for each user backed up in S3 as a CSV file? Then, using DynamoDB Streams, when a row is mutated, update that row in the CSV file in S3.
The CSV readers that are currently out there are geared towards parsing the whole CSV for use within the Lambda. What I would like instead is to find the specific row identified by the stream record and replace it with another row, without loading the whole file into memory, as the file may be quite large. The reason I want a backup on S3 is that in the future I will need to do batch processing on it, and reading 300k items from DynamoDB within a short period of time is not preferable.
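To make this concrete, here is a minimal sketch of the stream-triggered Lambda I have in mind. The bucket name, key, and the `user_id`/`name` columns are placeholders I made up. It works, but it reads the entire object into memory, which is exactly what I want to avoid:

```python
import csv
import io

import boto3

# Placeholder bucket/key layout, just for illustration.
BUCKET = "my-backup-bucket"
KEY = "users/backup.csv"

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by a DynamoDB stream; rewrites the changed row in the CSV.

    Problem: this loads the whole object into memory before editing it.
    """
    body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(body)))

    for record in event["Records"]:
        if record["eventName"] != "MODIFY":
            continue
        new_image = record["dynamodb"]["NewImage"]
        user_id = new_image["user_id"]["S"]  # assumed key attribute
        # Replace the matching row (first column assumed to hold user_id).
        for i, row in enumerate(rows):
            if row and row[0] == user_id:
                rows[i] = [user_id, new_image["name"]["S"]]  # assumed columns
                break

    out = io.StringIO()
    csv.writer(out).writerows(rows)
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=out.getvalue().encode("utf-8"))
```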
I have been looking at the boto3 documentation for S3 put and upload_fileobj.
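From what I can tell, `upload_fileobj` streams from any file-like object, so the upload side need not hold the whole CSV in memory; it is the read-and-modify side I am stuck on. A trivial example (placeholder bucket/key):

```python
import io

import boto3

s3 = boto3.client("s3")

# upload_fileobj reads from the file-like object in chunks,
# so the upload itself does not require the full CSV in memory.
buf = io.BytesIO(b"user_id,name\n123,Alice\n")
s3.upload_fileobj(buf, "my-backup-bucket", "users/backup.csv")
```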