
In a location-based app, there is a specific query that has to run fast:

SELECT count(*) FROM users
  WHERE earth_box(ll_to_earth(40.71427000, -74.00597000), 50000) @> ll_to_earth(latitude, longitude)

However, after copying the database with Postgres' own tools:

pg_dump dummy_users > dummy_users.dump
createdb slow_db
psql slow_db < dummy_users.dump

the query takes 2.5 seconds on slow_db instead of 0.5 seconds!

The planner chooses a different plan in slow_db. EXPLAIN ANALYZE on slow_db:

"Aggregate  (cost=10825.18..10825.19 rows=1 width=8) (actual time=2164.396..2164.396 rows=1 loops=1)"
"  ->  Bitmap Heap Scan on users  (cost=205.45..10818.39 rows=2714 width=0) (actual time=26.188..2155.680 rows=122836 loops=1)"
"        Recheck Cond: ('(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube @> (ll_to_earth(latitude, longitude))::cube)"
"        Rows Removed by Index Recheck: 364502"
"        Heap Blocks: exact=57514 lossy=33728"
"        ->  Bitmap Index Scan on distance_index  (cost=0.00..204.77 rows=2714 width=0) (actual time=20.068..20.068 rows=122836 loops=1)"
"              Index Cond: ((ll_to_earth(latitude, longitude))::cube <@ '(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube)"
"Planning Time: 1.002 ms"
"Execution Time: 2164.807 ms"

EXPLAIN ANALYZE on the original db:

"Aggregate  (cost=8807.01..8807.02 rows=1 width=8) (actual time=239.524..239.525 rows=1 loops=1)"
"  ->  Index Scan using distance_index on users  (cost=0.41..8801.69 rows=2130 width=0) (actual time=0.156..233.760 rows=122836 loops=1)"
"        Index Cond: ((ll_to_earth(latitude, longitude))::cube <@ '(1281995.9045467733, -4697354.822067326, 4110397.4955141144),(1381995.648489849, -4597355.078124251, 4210397.23945719)'::cube)"
"Planning Time: 3.928 ms"
"Execution Time: 239.546 ms"

In both databases there is an index on the location, created in exactly the same way:

CREATE INDEX 
   distance_index ON users USING gist (ll_to_earth(latitude, longitude))

I've tried running maintenance tools (ANALYZE, VACUUM, etc.) before and after running that query, with and without the index; it doesn't help!

Both DBs run on the exact same machine (so same Postgres server, distribution, and configuration). The data in both DBs is identical (a single table) and isn't changing. The Postgres version is 12.8.

psql's \l output for those databases:

                              List of databases
    Name     |  Owner   | Encoding | Collate | Ctype |   Access privileges   
-------------+----------+----------+---------+-------+----------------------- 
 dummy_users | yoni     | UTF8     | en_IL   | en_IL | 
 slow_db     | yoni     | UTF8     | en_IL   | en_IL | 

What is going on?

Update (thanks to Laurenz Albe): after SET enable_bitmapscan = off; and SET enable_seqscan = off; on the slow database, I ran the query again. Here is the EXPLAIN (ANALYZE, BUFFERS) output:
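The session, reconstructed (the query is the one from the top of the question; both planner settings are per-session and can be undone with RESET):

SET enable_bitmapscan = off;
SET enable_seqscan = off;

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM users
  WHERE earth_box(ll_to_earth(40.71427000, -74.00597000), 50000) @> ll_to_earth(latitude, longitude);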

"Aggregate  (cost=11018.63..11018.64 rows=1 width=8) (actual time=213.544..213.545 rows=1 loops=1)"
"  Buffers: shared hit=11667 read=110537"
"  ->  Index Scan using distance_index on users  (cost=0.41..11011.86 rows=2711 width=0) (actual time=0.262..207.164 rows=122836 loops=1)"
"        Index Cond: ((ll_to_earth(latitude, longitude))::cube <@ '(1282077.0159892815, -4697331.573647572, 4110397.4955141144),(1382076.7599323571, -4597331.829704497, 4210397.23945719)'::cube)"
"        Buffers: shared hit=11667 read=110537"
"Planning Time: 0.940 ms"
"Execution Time: 213.591 ms"
  • Postgres versions for the databases? Your results are from EXPLAIN, not EXPLAIN ANALYZE, as there is no actual-time data. Are you sure you ran ANALYZE on the new database? If so, what was the actual command? Commented Jul 5, 2022 at 23:35
  • @AdrianKlaver PostgreSQL 12.8 (Ubuntu 12.8-0ubuntu0.20.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit Commented Jul 5, 2022 at 23:40
  • @AdrianKlaver I am sure that I ran ANALYZE on the new database, using pgAdmin4: right-clicked on slow_db -> Maintenance and from there I tried everything (ANALYZE, VACUUM, etc.). That being said, I can run the commands manually as well if that could help. Is that the case? Commented Jul 5, 2022 at 23:48
  • I do not trust GUI front ends to do what you ask. Use psql, connect to the database, and run ANALYZE. Commented Jul 5, 2022 at 23:50
  • Can you SET enable_bitmapscan = off; and SET enable_seqscan = off; on the slow database, run the query again and post the EXPLAIN (ANALYZE, BUFFERS) output you get? Commented Jul 6, 2022 at 2:14

1 Answer


Manual VACUUM / ANALYZE after restore

After restoring into a new database, there are no column statistics yet. Normally, autovacuum would kick in eventually, but since "data [...] isn't changing", autovacuum won't be triggered.

For the same reason (data isn't changing), I suggest running this once after restoring your single table:

VACUUM (ANALYZE, FREEZE) users;

You might as well run FREEZE for a table that's never changed. (FULL isn't necessary, since there are no dead tuples in a freshly restored table.)
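To verify whether ANALYZE (manual or auto) has ever touched the table, you can query the cumulative statistics view; a quick check along these lines:

SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'users';

If last_analyze and last_autoanalyze are both NULL, the planner is working without column statistics.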

Explanation for the plan change

With everything else being equal, I suspect at least two major problems:

  1. Bad column statistics
  2. Bad database configuration (the more severe problem)


In the slow DB, Postgres expects rows=2714, while it expects rows=2130 in the fast one. The difference may not seem huge, but it may well be enough to tip Postgres over to the other query plan (which turns out to be inferior).

Seeing that Postgres actually finds rows=122836, either estimate is bad. The one in the slow DB is actually less bad. But the bitmap scan turns out to be slower than the index scan, even with many more qualifying rows than expected. (!) So your database configuration is most probably way off. The main problem is typically the default random_page_cost of 4, while a realistic setting for a fully cached, read-only table is much closer to 1. Maybe 1.1 to allow for some additional cost. There are a couple of other settings that encourage index scans, like effective_cache_size.
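For example, these settings can be applied persistently per database, so only the restored copy is affected (the values below are illustrative starting points, not tuned for your machine):

ALTER DATABASE slow_db SET random_page_cost = 1.1;
ALTER DATABASE slow_db SET effective_cache_size = '4GB';

New sessions connecting to slow_db pick these up; an existing session can test the effect immediately with SET random_page_cost = 1.1;.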

Estimates are just that: estimates. And column statistics are also just that: statistics. So they are not exact, but subject to some random variation. You might increase the statistics target to improve the quality of column statistics.
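Since the row estimate here comes from statistics on the index expression ll_to_earth(latitude, longitude), the statistics target can be raised on the expression index itself (possible since Postgres 11; COLUMN 1 refers to the first and only index column, and 500 is an illustrative value):

ALTER INDEX distance_index ALTER COLUMN 1 SET STATISTICS 500;
ANALYZE users;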

Cheap random reads favor index scans and discourage bitmap index scans.
More qualifying rows favor a bitmap index scan; fewer favor an index scan.


6 Comments

Appreciate the detailed answer; however, after turning enable_seqscan and enable_bitmapscan back on, and after running VACUUM (ANALYZE, FREEZE) users, the critical query timing went back to 2.5 seconds.
@yonikeren: Did you also tune the database configuration as suggested?
Sorry, after SET random_page_cost = 1.1; the timing went down to 0.5 s, which is alright.
@yonikeren: I would not go there. Too ugly. And typically not necessary. The default setting of 4.0 for random_page_cost is only appropriate for infrequently used databases or a serious lack of RAM in combination with anachronistically slow storage, so basically never nowadays. Writes to the table are only a secondary factor. As long as your table can typically remain cached, 1.0 is the realistic setting. 1.1 for some leeway.
BTW, Buffers: shared hit=11667 read=110537 means you ran the query on a mostly cold cache. Repeated calls with a warm cache should get much faster yet, assuming reasonable RAM and settings.
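As a side note to the cold-cache observation: the table and index can be loaded into cache explicitly with the pg_prewarm extension, which ships with standard Postgres (each call returns the number of blocks read):

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('users');
SELECT pg_prewarm('distance_index');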
