I am trying to improve the speed of a simple UPDATE query, but it takes between 0.7 and 1.5 s for a single row, which is too slow.
UPDATE users SET firstname = 'test' WHERE userid=2120;
Here is the EXPLAIN ANALYZE output:
Update on users (cost=0.43..8.45 rows=1 width=331) (actual time=0.068..0.068 rows=0 loops=1)
-> Index Scan using users_pkey on users (cost=0.43..8.45 rows=1 width=331) (actual time=0.039..0.040 rows=1 loops=1)
Index Cond: (userid = 2120)
Trigger updatemv: time=727.372 calls=1
Total runtime: 727.487 ms
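Nearly all of the runtime is attributed to the updatemv trigger rather than to the row update itself (0.068 ms). As a quick sanity check, the triggers defined on the table can be listed from the catalog (a sketch using only standard pg_trigger columns; nothing beyond the table name 'users' is assumed):

-- List user-defined triggers on the users table, excluding internal FK triggers
SELECT tgname, tgenabled
FROM pg_trigger
WHERE tgrelid = 'users'::regclass
  AND NOT tgisinternal;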
The total database size is 20 GB across about 60 tables. The problem is with the 'users' table, which has 1.36 million rows and 36 columns (4 bigint, 5 integer, 10 character varying from 32 to 255 characters, and the rest boolean); half of the columns are NULL in many rows. There are also 6 indexes on the 'users' table. The database is hosted on Amazon RDS db.m4.2xlarge with 8 vCPU, 32 GB RAM and 100 GB SSD, running PostgreSQL 9.3.
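For reference, the on-disk size of the table and each of its indexes can be checked with standard catalog functions (a sketch; only the table name 'users' is assumed):

-- Size of the users table and each of its indexes
SELECT relname, pg_size_pretty(pg_relation_size(oid))
FROM pg_class
WHERE relname = 'users'
   OR oid IN (SELECT indexrelid FROM pg_index WHERE indrelid = 'users'::regclass);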
I tried VACUUM ANALYZE on the tables, which helped, but it is still too slow.
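To see whether dead tuples accumulate again after vacuuming, the standard statistics view can be queried (a sketch, assuming nothing beyond the table name):

-- Live vs. dead tuples and last (auto)vacuum times for users
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'users';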
I have read about upgrading RAM/CPU, tuning the database in postgresql.conf, creating a separate tablespace for the big table, etc., but I am not sure what the best approach is for handling tables with millions of rows.
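For reference, the current values of a few commonly tuned settings can be inspected with SHOW (illustrative checks only; checkpoint_segments is the 9.3-era setting, replaced by max_wal_size in 9.5):

SHOW shared_buffers;
SHOW work_mem;
SHOW effective_cache_size;
SHOW checkpoint_segments;  -- still the relevant WAL setting on 9.3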
At the current growth rate the table will reach 20 million rows within the next 12 months, so I need a durable solution.
Any advice on how to improve the speed of UPDATE queries on big tables is welcome.
Comment: "Trigger updatemv: time=727.372 calls=1" - do you have triggers on this table, or other tables with FKs that refer to it?
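Both can be checked from the system catalogs (a sketch; 'updatemv' is the trigger name taken from the plan above):

-- Definition of the updatemv trigger seen in the plan
SELECT pg_get_triggerdef(oid)
FROM pg_trigger
WHERE tgrelid = 'users'::regclass
  AND tgname = 'updatemv';

-- Foreign keys in other tables that reference users
SELECT conname, conrelid::regclass AS referencing_table
FROM pg_constraint
WHERE confrelid = 'users'::regclass
  AND contype = 'f';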