I am running a PostgreSQL server on a slowish older computer with a fast SSD, a 2.4 GHz i5 processor, and 8 GB of RAM. The machine is not a speed monster, but I am still surprised by the performance: creating an index on a table with 40,000,000 rows takes roughly half an hour. My settings are the following:
max_connections = 2
shared_buffers = 2GB
effective_cache_size = 6GB
work_mem = 1GB
maintenance_work_mem = 512MB
min_wal_size = 1GB
max_wal_size = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
Looking at top, the process appears to be CPU bound (100% CPU), although the database was only using around 500 MB of memory, and I would have expected it to use more.
I am going to create the index only once per table, during the first data import. Are there any settings I could tweak to speed up this operation?
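For completeness, this is roughly how I would double-check from psql that the settings above are actually in effect for the session doing the import (just a quick sanity check; the expected values are the ones from my configuration):

SHOW maintenance_work_mem;   -- expect 512MB
SHOW work_mem;               -- expect 1GB
SHOW shared_buffers;         -- expect 2GB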
Comment: Please show us the CREATE INDEX statement. Does it get faster if you increase maintenance_work_mem?

Reply from the asker: The creation is simple: CREATE INDEX index_name on name (varcharField); Would a larger maintenance_work_mem help? Also, is the time "normal" by any means? This is my first encounter with Postgres, so it's hard for me to tell.

Answer: Increase maintenance_work_mem and see if that makes it faster. It really seems slow - on our systems we get about ten times that speed. To see where the time is spent, you could profile with perf or OProfile if you are on Linux (this requires debugging symbols).
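For example, a minimal sketch of a session-level override (the 2GB value is only a guess at what an 8 GB machine with these settings can spare, and index_name / name / varcharField are the placeholders from the comment above):

SET maintenance_work_mem = '2GB';               -- applies only to this session
CREATE INDEX index_name ON name (varcharField);
RESET maintenance_work_mem;                     -- back to the configured 512MB

The override affects only the current session, so the value in postgresql.conf stays at 512MB for everything else; that makes it a cheap experiment to run during the initial import.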