I'm building a Lambda function that generates a PDF file, which then needs to be sent to S3.
The file is generated inside the Lambda function itself, so I can't use a pre-signed URL to upload it from the client, since the client never has the file. The generated file will stay well under 512 MB (the Lambda /tmp storage limit) but will be larger than 6 MB.
I'm not sure yet whether I should convert this to a container instead, since Lambda has a maximum request payload of 6 MB.
One idea that came to mind is to use S3 multipart upload and upload the file in chunks of 4 MB.
Does that actually solve the problem, though? Or should I just move to a container instead?
Taking Lambda's cost savings into account, having a way around the 6 MB limit would be very beneficial in my case.
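For context, here is a minimal sketch of what the multipart idea could look like with boto3. Note one assumption baked in: S3 rejects non-final multipart parts smaller than 5 MiB, so the sketch uses 5 MiB parts rather than 4 MB. The bucket and key names are placeholders, and the actual S3 calls are shown only as comments since they need AWS credentials to run.

```python
import io

# S3 requires every part except the last to be at least 5 MiB,
# so 4 MB parts would be rejected; 5 MiB is used here instead.
PART_SIZE = 5 * 1024 * 1024

def iter_parts(fileobj, part_size=PART_SIZE):
    """Yield (part_number, chunk) pairs suitable for s3.upload_part()."""
    part_number = 1
    while True:
        chunk = fileobj.read(part_size)
        if not chunk:
            break
        yield part_number, chunk
        part_number += 1

# Inside the Lambda handler it would look roughly like this
# ("my-bucket" / "report.pdf" are placeholder names):
#
#   import boto3
#   s3 = boto3.client("s3")
#   mpu = s3.create_multipart_upload(Bucket="my-bucket", Key="report.pdf")
#   parts = []
#   for n, chunk in iter_parts(pdf_buffer):
#       resp = s3.upload_part(Bucket="my-bucket", Key="report.pdf",
#                             UploadId=mpu["UploadId"],
#                             PartNumber=n, Body=chunk)
#       parts.append({"PartNumber": n, "ETag": resp["ETag"]})
#   s3.complete_multipart_upload(Bucket="my-bucket", Key="report.pdf",
#                                UploadId=mpu["UploadId"],
#                                MultipartUpload={"Parts": parts})
```

In practice `s3.upload_fileobj()` does this chunking automatically, so the manual part loop is only shown to make the mechanics explicit.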
question from:
https://stackoverflow.com/questions/66060449/will-multipart-upload-with-s3-overcome-the-lambda-6mb-request-payload-limit