
So my problem is that DynamoDB is taking quite some time to return a single object. I'm using Node.js and the AWS DocumentClient. The weird thing is that it takes 100ms to 200ms to "select" a single item from the DB. Is there any way to make it faster?

Example code:

var AWS = require("aws-sdk");
var docClient = new AWS.DynamoDB.DocumentClient();

console.time("user get");

var params = {
    TableName: 'User',
    Key: {
        "id": "2f34rf23-4523452-345234"
    }
};

docClient.get(params, function(err, data) {
    if (err) {
        callback(err);
    }
    else {
        console.timeEnd("user get");
    }
});

And the average for this simple piece of code in Lambda is 130ms. Any idea what I could do to make it faster? The User table has only a primary partition key "id" and a global secondary index with partition key "email". When I try this from my console it takes even more time.

Any help will be much appreciated!

  • Are you running the code in the same region as your DDB table? Commented Feb 2, 2017 at 12:14
  • Yes, it is all in the same region. Commented Feb 2, 2017 at 13:11

4 Answers


I faced exactly the same issue using Lambda@Edge. Responses from DynamoDB took 130-140ms on average while the DynamoDB latency graph showed 10-20ms latency.

I managed to improve response times to ~30ms on average by disabling SSL, parameter validation, and convertResponseTypes:

const docClient = new AWS.DynamoDB.DocumentClient({
  apiVersion: '2012-08-10',
  sslEnabled: false,           // talk to DynamoDB over plain HTTP, skipping the TLS handshake
  paramValidation: false,      // skip client-side validation of request parameters
  convertResponseTypes: false  // skip converting response attribute types in the SDK
});

Most likely the cause of the issue was CPU/network throttling in the Lambda itself. A Lambda@Edge viewer-request function can have at most 128MB of memory, which is a pretty slow Lambda, so disabling the extra checks and SSL made things a lot faster.

If you are running just a regular Lambda, increasing memory should fix the issue.
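You would normally raise the memory from the console or your deployment tooling, but as a rough sketch using the same SDK (the function name and memory size below are placeholders):

var AWS = require("aws-sdk");
var lambda = new AWS.Lambda();

// More memory also means a proportionally larger share of CPU and network,
// which is usually what removes this kind of latency.
lambda.updateFunctionConfiguration({
    FunctionName: "my-user-service", // placeholder function name
    MemorySize: 512                  // in MB
}, function(err, data) {
    if (err) console.error(err);
    else console.log("new memory size:", data.MemorySize);
});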


1 Comment

Omg, this reduced my queries from 400-800ms to 15-30ms. I was so upset because my industry is one of the most fast-paced "every second counts" type spaces, and I was thinking I'd have to re-plan the whole project because of how slow DynamoDB was! Thank you!!!

Have you warmed up your Lambda function? If you are only running it ad-hoc, and not running a continuous load, the function might not be available yet on the container running it, so additional time might be taken there. One way to support or refute this theory would be to look at latency metrics for the GetItem API. Finally, you could try using AWS X-Ray to find other spots of latency in your stack.
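If you go the X-Ray route, a minimal sketch of instrumenting the SDK (this assumes the aws-xray-sdk-core package and that active tracing is enabled on the function):

// Wrap the AWS SDK so every DynamoDB call is recorded as an X-Ray subsegment.
const AWSXRay = require('aws-xray-sdk-core');
const AWS = AWSXRay.captureAWS(require('aws-sdk'));

const docClient = new AWS.DynamoDB.DocumentClient();
// Calls made through docClient now show up in the trace, so you can see
// whether the time goes to DynamoDB, the SDK, or the Lambda runtime itself.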

The DynamoDB SDK could also be retrying, adding to your perceived latency in the Lambda function. Given that your items are around 10 KB, it is possible you are getting throttled. Have you provisioned enough read capacity? You can verify both your read latency and read throttling metrics in the DynamoDB console for your table.
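To check the throttling theory from the code itself, you can ask DynamoDB to report the consumed capacity of each read; a sketch reusing the table and key from the question:

var params = {
    TableName: 'User',
    Key: { "id": "2f34rf23-4523452-345234" },
    ReturnConsumedCapacity: 'TOTAL' // report how many read capacity units this call used
};

docClient.get(params, function(err, data) {
    if (err) return console.error(err);
    // An eventually consistent read of a ~10 KB item consumes ~1.5 RCUs,
    // so sustained traffic can hit a low provisioned limit sooner than expected.
    console.log(data.ConsumedCapacity);
});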

7 Comments

Well, even if I run the function like 10 times in a row the result is the same.
How big are your items? You typically will see around 10-15ms latency for reading small items (less than 1KB), and another 10-15ms for Lambda JS execution time.
Hmm, maybe because my items are quite large? Every item has approximately 10k characters and the size is around 10KB. Btw, even if I add ProjectionExpression: "id" it doesn't speed up the whole thing.
Your read latency will be a function of item size. Adding a projection expression will reduce HTTP cost but will not reduce the time to read the item on the service side. I have added a note about throttling to my answer.
Well, in the DynamoDB console I see 0 throttled requests, so I guess that isn't it. I will try to reduce the size of my items. Maybe any other idea?

I know this is a little old, but for anyone finding this question now: the instantiation of the client can be extremely slow. Local testing was fast, yet accessing DynamoDB from an Elastic Beanstalk instance in the same region was extremely slow!

Accessing DynamoDB through a single, reused client instance improved the speeds significantly.
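In a Lambda (or any long-lived Node.js process) that means creating the DocumentClient once at module scope instead of inside the handler; a minimal sketch, with the handler shape assumed:

var AWS = require("aws-sdk");

// Create the client once, at module scope, so every warm invocation reuses
// the same instance (and its underlying connections) instead of paying the
// instantiation cost on each request.
var docClient = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
    var params = {
        TableName: 'User',
        Key: { "id": event.id } // hypothetical: the id arrives on the event
    };
    docClient.get(params, function(err, data) {
        if (err) return callback(err);
        callback(null, data.Item);
    });
};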

Comments


Reusing the connection helped speed up my calls from ~120ms to ~35ms.

Reusing Connections with Keep-Alive in Node.js

By default, the Node.js HTTP/HTTPS agent creates a new TCP connection for every request. To avoid the cost of establishing a new connection, you can reuse an existing one.

For short-lived operations, such as DynamoDB queries, the latency overhead of setting up a TCP connection might be greater than the operation itself. Additionally, since DynamoDB encryption at rest is integrated with AWS KMS, you may experience latencies from the database having to re-establish new AWS KMS cache entries for each operation.
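A minimal sketch of wiring a keep-alive agent into the DocumentClient from the question (aws-sdk v2); in newer SDK versions, setting the AWS_NODEJS_CONNECTION_REUSE_ENABLED=1 environment variable achieves the same thing without code changes:

var AWS = require("aws-sdk");
var https = require("https");

// Reuse TCP (and TLS) connections across requests instead of opening a new
// one per call.
var agent = new https.Agent({ keepAlive: true });

var docClient = new AWS.DynamoDB.DocumentClient({
    httpOptions: { agent: agent }
});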

Comments
