
I am trying to improve the speed of a simple UPDATE query, but it takes 0.7-1.5 s for a single row, which is too slow.

UPDATE users SET firstname = 'test' WHERE userid=2120;

Here is the EXPLAIN ANALYZE output:

Update on users  (cost=0.43..8.45 rows=1 width=331) (actual time=0.068..0.068 rows=0 loops=1)
->  Index Scan using users_pkey on users  (cost=0.43..8.45 rows=1 width=331) (actual time=0.039..0.040 rows=1 loops=1)
    Index Cond: (userid = 2120)
Trigger updatemv: time=727.372 calls=1
Total runtime: 727.487 ms

The total database size is 20 GB across about 60 tables. The problem is with the 'users' table, which has 1.36 million rows. It has 36 columns (4 bigint, 5 integer, 10 character varying (from 32 to 255), and the rest boolean), and half of them are NULL for a lot of rows. There are also 6 indexes on the 'users' table. The database is hosted on an Amazon RDS db.m4.2xlarge with 8 vCPUs, 32 GB RAM and 100 GB SSD. The PostgreSQL version is 9.3.
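For reference, the table and index footprint described above can be checked directly from the catalogs (a generic query; only the table name users comes from the question):

```sql
-- Total on-disk size of the table including its indexes and TOAST data:
SELECT pg_size_pretty(pg_total_relation_size('users')) AS total_size;

-- Size of each index on the table:
SELECT indexrelname,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE relname = 'users';
```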

I tried VACUUM ANALYZE on the tables and that helped, but it is still too slow.

I have read about upgrading RAM/CPU, tuning the database in postgresql.conf, creating a separate tablespace for the big table, etc. But I am not sure what the best approach is for handling big tables with millions of rows.

At the current growth rate the table will reach 20 million rows in the next 12 months, so I need a durable solution.

Any advice on how to improve the speed of UPDATE queries on big tables is welcome.

  • I'm missing something . . . 20 GB for 1.36 million rows is 15k per row. And only spread among 36 columns? Those are mighty big columns, especially if most are NULL. Commented Mar 27, 2017 at 11:47
  • Trigger updatemv: time=727.372 calls=1 — you have triggers on this table, or other tables with FKs that refer to it? Commented Mar 27, 2017 at 11:50
  • @GordonLinoff The database has about 60 tables with a total size of 20 GB. I have a problem with the biggest table, 'users', which has 1.36 million rows. I think there are no big columns: 4 bigint, 3 integer, 15 character varying (from 32 to 255), 5 timestamps, and boolean fields. Commented Mar 27, 2017 at 12:02
  • Checking the obvious: could there be locks by other modifying transactions? Commented Mar 27, 2017 at 12:08
  • You seem to have a materialized view which is being updated when this table changes. The update of the view seems to be triggered by a trigger named updatemv. Commented Mar 27, 2017 at 12:08
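To verify what the comments suspect, the triggers on the table can be listed straight from the catalog (a generic query that works on 9.3; only the table name users comes from the question):

```sql
-- List user-defined triggers on the users table with their full definitions:
SELECT tgname, pg_get_triggerdef(oid)
FROM pg_trigger
WHERE tgrelid = 'users'::regclass
  AND NOT tgisinternal;
```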

2 Answers


Thank you @joop, I resolved my problem. I had a trigger that refreshed a materialized view. When I removed it, the UPDATE query took just 0.123 ms instead of 727.487 ms, about 6000 times faster.

I have reorganized the materialized view in a different way.
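A minimal sketch of that fix: drop the per-row trigger so single-row UPDATEs stop paying the refresh cost, and refresh the view on a schedule instead. The trigger name updatemv comes from the EXPLAIN output; the view name is hypothetical. Note that on 9.3 a plain REFRESH takes an exclusive lock on the view; the non-blocking CONCURRENTLY variant requires 9.4+ and a unique index on the view.

```sql
-- Remove the trigger that refreshed the materialized view on every UPDATE:
DROP TRIGGER updatemv ON users;

-- Refresh the view periodically instead, e.g. from a cron job
-- (user_stats_mv is a hypothetical view name):
REFRESH MATERIALIZED VIEW user_stats_mv;

-- On PostgreSQL 9.4+ (needs a unique index on the view):
-- REFRESH MATERIALIZED VIEW CONCURRENTLY user_stats_mv;
```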


Tuning the parameters in postgresql.conf can have a huge impact and it's free, so I would start there. The default values are way too low.
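As an illustration only, here are common starting points for a 9.3-era server with 32 GB RAM; the values are assumptions based on general tuning guidance, not taken from the question. On RDS these are set through a DB parameter group rather than by editing postgresql.conf directly.

```
# Illustrative values for ~32 GB RAM (assumptions, adjust for the workload):
shared_buffers = 8GB            # roughly 25% of RAM
effective_cache_size = 24GB     # estimate of what the OS page cache holds
work_mem = 32MB                 # per sort/hash operation, per backend
maintenance_work_mem = 1GB      # speeds up VACUUM and index builds
checkpoint_segments = 32        # 9.3-era setting; replaced by max_wal_size in 9.5+
```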
