144

I'm trying to pull a large file from S3 and write it to RDS using pandas dataframes.

I've been googling this error and haven't seen it anywhere, does anyone know what this extremely generic sounding error could mean? I've encountered memory issues previously but expanding the memory removed that error.

{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: 99aa9711-ca93-4201-8b1e-73bf31b762a6 Error: Runtime exited with error: signal: killed"
}
    Can you share your Lambda's logs? You're probably running out of memory again. If you're reading a large file from S3, be sure to read it in reasonably sized chunks and process each chunk before reading the next one. Don't read the entire file into memory at once. Commented Nov 26, 2019 at 19:44
  • thanks you're right, i just expanded my memory to the max and it worked. Commented Nov 26, 2019 at 19:56
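A minimal sketch of the chunked-reading approach the comment suggests. The in-memory buffer here stands in for an S3 streaming body; with boto3 you would pass `s3.get_object(Bucket=..., Key=...)["Body"]` to `read_csv` instead (bucket/key names would be your own):

```python
import io
import pandas as pd

# Stand-in for an S3 object body; with boto3 you would pass
# s3.get_object(Bucket=..., Key=...)["Body"] to read_csv instead.
buf = io.StringIO("a,b\n" + "\n".join(f"{i},{i * 2}" for i in range(10)))

total = 0
# chunksize yields DataFrames of at most 4 rows, so only one small
# chunk is in memory at a time instead of the whole file.
for chunk in pd.read_csv(buf, chunksize=4):
    total += int(chunk["b"].sum())  # process, then discard each chunk
```

Each chunk can then be written to RDS (e.g. via `DataFrame.to_sql`) before the next one is read, keeping peak memory bounded by the chunk size.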

8 Answers

214

I got the same error when executing a Lambda that processes an image; only a few results come up when searching the web for this error.

Increase the AWS Lambda memory by 1.5x or 2x to resolve it. For example, increase the memory from 128 MB to 512 MB.

This runtime error occurs because the Lambda function stops executing the remaining lines of code; moreover, it is not possible to catch the error and run the rest of the code.

[Screenshot: AWS Lambda memory configuration]


3 Comments

I got the same error integrating Sentry in an API server running Express.js, and solved it the same way.
Yeah, you would think they could at least give a more meaningful log. Hit the memory limit as well.
Just to expand on this answer (and in reply to @Overcode) - the end line of the log actually specifies the max memory used during the invocation, which can help you diagnose whether the function running out of memory is actually the cause for error: REPORT RequestId: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx Duration: 37091.65 ms Billed Duration: 37092 ms Memory Size: 1769 MB Max Memory Used: 1769 MB
36

You're reaching the memory limit because boto3 downloads your file with parallel transfers. You could increase the Lambda's memory, but that just masks the problem and you'll pay more.

By default, the S3 transfer manager downloads files larger than multipart_threshold=8MB with max_concurrency=10 parallel threads. That means it will buffer about 80 MB of your data, plus threading overhead.

You could reduce it to max_concurrency=2, for example; that would use about 16 MB and should fit into your Lambda's memory.

Please note that this may slightly decrease your downloading performance.

import boto3
from boto3.s3.transfer import TransferConfig

# Limit concurrent download threads to reduce peak memory usage
config = TransferConfig(max_concurrency=2)
s3 = boto3.client('s3')
s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME', Config=config)

Reference: https://docs.aws.amazon.com/cli/latest/topic/s3-config.html
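A back-of-the-envelope calculation of the peak buffer memory behind the numbers above, assuming boto3's default 8 MiB multipart part size:

```python
MIB = 1024 * 1024
multipart_chunksize = 8 * MIB  # boto3's default multipart part size
max_concurrency = 2            # reduced from the default of 10

# Each concurrent thread buffers roughly one part at a time
peak_bytes = max_concurrency * multipart_chunksize
peak_mib = peak_bytes // MIB   # 16 MiB here, versus ~80 MiB at concurrency 10
```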


8

I would like to go over how to diagnose the issue to make sure it really is a memory issue.

You can go to the Monitoring tab and see whether the Lambda is maxing out its memory.

[Screenshots: memory-usage graph of a Lambda that is fine vs. one that is maxing out its memory]

Another way is to check the logs; after each execution you will find a line similar to:

REPORT RequestId: 84b3733a-8e1a-4fc8-8bc4-b30f4b9149ba Duration: 39651.34 ms Billed Duration: 39652 ms Memory Size: 512 MB Max Memory Used: 512 MB XRAY TraceId: 1-65bc4cc0-54791eb622434119696221cd SegmentId: 6a9519384aa29822 Sampled: true

You can see Memory Size: 512 MB and Max Memory Used: 512 MB
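If you want to check this programmatically (e.g. when scanning CloudWatch logs), the two figures can be pulled out of the REPORT line with a small regex. This is just a sketch over the sample line above:

```python
import re

# Sample REPORT line as emitted at the end of a Lambda invocation
report = ("REPORT RequestId: 84b3733a-8e1a-4fc8-8bc4-b30f4b9149ba "
          "Duration: 39651.34 ms Billed Duration: 39652 ms "
          "Memory Size: 512 MB Max Memory Used: 512 MB")

size = int(re.search(r"Memory Size: (\d+) MB", report).group(1))
used = int(re.search(r"Max Memory Used: (\d+) MB", report).group(1))
maxed_out = used >= size  # True suggests the function hit its memory ceiling
```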

If it's maxing out, then you should definitely increase the memory.

We use IaC, so we had to change this in two places: directly in the console so it took effect immediately, and in the code so further deploys keep the higher memory.
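For the immediate part, the same change can also be applied from the CLI (FUNCTION_NAME and the memory value here are placeholders, not from the original post):

```shell
# Immediately raise the function's memory; keep your IaC in sync so the
# next deploy does not revert it.
aws lambda update-function-configuration \
    --function-name FUNCTION_NAME \
    --memory-size 1024
```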

Do not increase the memory unless you need to so you can save some money.


7

It is not timing out at 15 minutes, since that would log the error "Task timed out after 901.02 seconds", which the OP did not get. As others have said, they are running out of memory.


3

First of all, AWS Lambda is not meant for long-running, heavy operations like pulling large files from S3 and writing them to RDS.

This process can take a long time depending on the file size and data. The maximum execution time of AWS Lambda is 15 minutes, so whatever task your Lambda performs must complete within the time limit you configured (15 minutes at most).

With large and heavy processing in Lambda you can hit an out-of-memory error or a timeout, or you may simply need more processing power.

Another way of doing such large and heavy processing is AWS Glue Jobs, AWS's managed ETL service.


3
  • The solution is to increase the AWS Lambda memory by 1.5x or 2x,
  • because when this runtime error occurs, the Lambda function does not execute any further lines of code, and it is not possible to catch the error and run the rest of it.
  • This error acts as a signal to the Lambda execution environment to terminate the current execution.


0

I had the same issue. Increasing the Lambda memory resolved it.


-1

To add: if anyone is using AWS Amplify, as in the project I was working on, there are still Lambdas under the hood, and you can access and configure them directly from the AWS Lambda console.
