I have a few hundred PDFs in an S3 bucket and I want a Lambda function that creates a zip file containing all of them.
Doing this locally in Python is easy enough, and I had assumed the logic would transfer over to AWS Lambda in a pretty straightforward way, but so far I haven't managed to get it working.
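For context, the local version I have in mind is roughly this (the folder and output names are just placeholders):

import os
from zipfile import ZipFile, ZIP_DEFLATED

# Zip every PDF found in a local folder
with ZipFile('all_pdfs.zip', 'w', compression=ZIP_DEFLATED) as zf:
    for name in os.listdir('pdfs'):
        if name.lower().endswith('.pdf'):
            zf.write(os.path.join('pdfs', name), arcname=name)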
I have been using the zipfile Python library, as well as boto3. My logic is as simple as finding all the files, appending their keys to a files_to_zip list, and then iterating through that list, writing each one to the new zip file.
This, however, has kicked up a number of issues, and I think that is down to my shortfalls in understanding how reading and writing files works in Lambda.
Here is the code I have tried so far:
import os
import boto3
from io import BytesIO, StringIO
from zipfile import ZipFile, ZIP_DEFLATED

def zipping_files(event, context):
    s3 = boto3.resource('s3')
    s3_client = boto3.client('s3')

    BUCKET = 'BUCKET NAME'
    PREFIX_1 = 'KEY NAME'
    new_zip = r'NEW KEY NAME'

    # Collect the key of every object under the prefix
    files_to_zip = []
    response = s3_client.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX_1)
    for obj in response['Contents']:
        files_to_zip.append(str(obj['Key']))

    # Write each key into a new zip archive
    with ZipFile(new_zip, 'w', compression=ZIP_DEFLATED, allowZip64=True) as zf:
        for file in files_to_zip:
            zf.write(file)
I am getting error messages such as FileNotFoundError for the new_zip path, and errors saying the file system is read-only.
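From what I have read, the Lambda file system is read-only except for /tmp, and ZipFile.write() expects a local file path rather than an S3 key, so my current guess is that I need to download each object's contents and build the archive either under /tmp or in memory before uploading it back to the bucket. Something like the sketch below is what I have in mind (the bucket and key names are placeholders, and it assumes the whole archive fits in memory), but I'm not sure whether this is the right way to do it in Lambda:

import boto3
from io import BytesIO
from zipfile import ZipFile, ZIP_DEFLATED

def zipping_files(event, context):
    s3_client = boto3.client('s3')
    BUCKET = 'BUCKET NAME'
    PREFIX_1 = 'KEY NAME'
    NEW_KEY = 'NEW KEY NAME'

    # List the keys of every object under the prefix
    response = s3_client.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX_1)
    keys = [obj['Key'] for obj in response['Contents']]

    # Build the archive in memory so nothing touches the read-only file system
    zip_buffer = BytesIO()
    with ZipFile(zip_buffer, 'w', compression=ZIP_DEFLATED, allowZip64=True) as zf:
        for key in keys:
            body = s3_client.get_object(Bucket=BUCKET, Key=key)['Body'].read()
            zf.writestr(key, body)

    # Upload the finished zip back to the bucket
    zip_buffer.seek(0)
    s3_client.upload_fileobj(zip_buffer, BUCKET, NEW_KEY)

If holding everything in memory isn't viable for a few hundred PDFs, I assume the alternative would be writing the archive to a path under /tmp and uploading it with upload_file, but I haven't tried either approach end to end.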