
We have a problem with aggregation queries running for a long time (a couple of minutes).

Collection:

We have a collection of 250 million documents with about 20 fields per document. The total size of the collection is 110GB.

We have indexes on the "our_id" and "dtKey" fields.

Hardware:

Memory:

24GB RAM (6 * 4GB DIMMS 1333 Mhz)

Disk:

An 11TB LVM volume built from 4 disks of 3TB each:

  • 600MB/s maximum instantaneous data transfers.

  • 7200 RPM spindle. Average latency = 4.16ms

  • RAID 0

CPU:

2 × Intel Xeon E5-2420 @ 1.90GHz, for a total of 12 cores / 24 threads. Dell R420.

Problem: We are trying to run the following aggregation query:

db.our_collection.aggregate(
    [
        {
            "$match":
            {
                "$and":
                    [
                        {"dtKey":{"$gte":20140916}},
                        {"dtKey":{"$lt":20141217}},
                        {"our_id":"111111111"}
                    ]
            }
        },
        {
            "$project":
            {
                "field1":1,
                "date":1
            }
        },
        {
            "$group":
            {
                "_id":
                {
                    "day":{"$dayOfYear":"$date"},
                    "year":{"$year":"$date"}
                },
                "field1":{"$sum":"$field1"}
            }
        }
    ]
);

This query takes a couple of minutes to run. While it is running we can see the following:

  • Mongo current operation is yielding more than 300K times
  • On iostat we see ~100% disk utilization

After this query is done, the data seems to be in the cache and the same query runs again in a split second.

After running it for 3-4 users, it seems that the first one has already been evicted from the cache and the query takes a long time again.

We have run a count on the matching part and seen that we have users with 50K documents as well as users with 500K documents.

We tried running only the matching part:

db.pub_stats.aggregate(
    [
        {
            "$match":
            {
                "$and":
                    [
                        {"dtKey":{"$gte":20140916}},
                        {"dtKey":{"$lt":20141217}},
                        {"our_id":"112162107"}
                    ]
            }
        }
    ]
);

This query alone seems to take approximately 300-500MB of memory,

but after running the full pipeline, memory usage grows to about 3.5GB.

Questions:

  1. Why does the aggregation pipeline take so much memory?
  2. How can we improve performance so the query runs in a reasonable time for an HTTP request?

1 Answer

  1. Why does the aggregation pipeline take so much memory?

Performing only a $match doesn't have to read the actual data; it can be answered from the indexes. Because the projection accesses field1, the actual documents have to be read, and they will probably be cached as well.

Also, grouping can be expensive. Normally, it should report an error if your grouping stage requires more than 100MB of memory - which version are you using? It has to scan the entire result set before yielding results, and MongoDB will have to at least store a pointer or an index for each element in the groups. Still, I'd guess the reading of the full documents is the key reason for the memory increase.
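Conceptually, the $group stage behaves like the following plain-JavaScript sketch (the data and helper name are illustrative, not from the real collection): every matched document is visited, and a per-group accumulator is held in memory until the whole input has been consumed.

```javascript
// Plain-JS sketch of the {"$group": {"_id": {day, year}, "field1": {"$sum": ...}}}
// stage. Nothing can be emitted until all input documents are seen, so the
// accumulator map for every group lives in memory for the whole scan.
function groupByDay(docs) {
  const groups = new Map();
  for (const doc of docs) {
    // Mirrors {"day": {"$dayOfYear": "$date"}, "year": {"$year": "$date"}}
    const yearStart = new Date(Date.UTC(doc.date.getUTCFullYear(), 0, 0));
    const dayOfYear = Math.floor((doc.date - yearStart) / 86400000);
    const key = doc.date.getUTCFullYear() + ":" + dayOfYear;
    // Mirrors {"field1": {"$sum": "$field1"}}
    groups.set(key, (groups.get(key) || 0) + doc.field1);
  }
  return groups;
}
```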

  2. How can we improve performance so the query runs in a reasonable time for an HTTP request?

Your dtKey appears to encode time, and the grouping is also done by time. I'd try to exploit that fact - for instance, by precomputing aggregates for each combination of day and our_id. That makes a lot of sense if there are no other criteria and the data doesn't change much anymore.
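As a minimal sketch of that precomputation, assuming a hypothetical daily_stats collection that is upserted on every write (the collection name and values are illustrative):

```javascript
// Hypothetical write path: whenever a document is written to our_collection,
// also increment the per-(our_id, dtKey) daily total. Names and the value 5
// are assumptions for illustration only.
db.daily_stats.update(
    { "our_id": "111111111", "dtKey": 20140916 },
    { "$inc": { "field1": 5 } },   // add the written field1 value
    { "upsert": true }
);

// The HTTP request then reads ~90 tiny precomputed documents instead of
// scanning 50K-500K full ones:
db.daily_stats.find({
    "our_id": "111111111",
    "dtKey": { "$gte": 20140916, "$lt": 20141217 }
});
```

Since the documents are already one-per-day, the group stage disappears entirely for this query shape.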

Otherwise, I'd try to move the {"our_id":"111111111"} criterion to the first position, because an equality condition should always precede range queries. I'd guess the query optimizer of the aggregation framework is smart enough, but it's worth a try. Also, you might want to try turning your two indexes into a single compound index { our_id: 1, dtKey: 1 }. Index intersections are supported now, but I'm not sure how efficient that really is. Use the built-in profiler and .explain() to analyze your query.
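In the shell, both suggestions look roughly like this (the explain option on aggregate assumes MongoDB 2.6+):

```javascript
// Compound index: equality field (our_id) first, then the range field (dtKey).
db.our_collection.ensureIndex({ "our_id": 1, "dtKey": 1 });

// Check which plan the $match stage actually picks:
db.our_collection.aggregate(
    [
        { "$match": { "our_id": "111111111",
                      "dtKey": { "$gte": 20140916, "$lt": 20141217 } } }
    ],
    { "explain": true }
);
```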

Lastly, MongoDB is designed for write-heavy use and scanning data sets of hundreds of GB from disk in a matter of milliseconds isn't feasible computationally at all. If your dataset is larger than your RAM, you'll be facing massive IO delays on the scale of tens of milliseconds and upwards, tens or hundreds of thousands of times, because of all the required disk operations. Remember that with random access you'll never get even close to the theoretical sequential disk transfer rates. If you can't precompute, I guess you'll need a lot more RAM. Maybe SSDs help, but that is all just guesswork.
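Relatedly, if the grouping stage does hit the 100MB per-stage limit, MongoDB 2.6+ can spill aggregation stages to temporary files on disk instead of erroring out - slow, given your IO situation, but it avoids the hard failure:

```javascript
// allowDiskUse (MongoDB 2.6+) trades the 100MB in-memory stage limit for
// extra disk IO by writing intermediate data to temporary files.
db.our_collection.aggregate(
    [ /* same pipeline as above */ ],
    { "allowDiskUse": true }
);
```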


4 Comments

The order of index fields matters, so it should be db.pub_stats.ensureIndex({dtKey:1,our_id:1}).
Yes, the order matters, but it should be {our_id:1,dtKey:1}, because our_id is an equality condition and dtKey is in a range query. The order in the query should be changed accordingly; that's what I wrote in the answer, but maybe it wasn't well put.
@mnemosyn - nice explained
"I guess you'll need a lot more RAM" won't solve anything, as Mongo has an annoying 100MB limit per aggregation step.

