I have a large PostgreSQL table: 2.8 million rows, 2345 MB in size, 49 columns, mostly short VARCHAR fields but with one large JSON field.
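For reference, queries along these lines would report those figures (a sketch; the table name all is taken from the statement in the logs below, and needs quoting since ALL is a reserved word):

-- total on-disk size, including TOAST and indexes
SELECT pg_size_pretty(pg_total_relation_size('"all"'));

-- exact row count
SELECT count(*) FROM "all";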
It's running on an Ubuntu 12.04 VM with 4GB RAM.
When I run a SELECT * against this table, my psql connection is terminated. Looking in the error logs, all I see is:
2014-03-19 18:50:53 UTC LOG: could not send data to client: Connection reset by peer
2014-03-19 18:50:53 UTC STATEMENT: select * from all;
2014-03-19 18:50:53 UTC FATAL: connection to client lost
2014-03-19 18:50:53 UTC STATEMENT: select * from all;
Why is this happening? Is there some maximum amount of data that can be transferred to the client, and is that configurable in Postgres?
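For what it's worth, the workaround I'd try next (untested, just a sketch using psql's FETCH_COUNT variable and the table name from the logs) is to have psql fetch the result through a cursor in batches instead of buffering the whole result set client-side:

-- In psql: make SELECTs run through a cursor, fetching 1000 rows at a time
-- (1000 is an arbitrary batch size).
\set FETCH_COUNT 1000
SELECT * FROM "all";

-- Equivalent plain-SQL approach with an explicit cursor
-- (big_cur is just an illustrative name):
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM "all";
FETCH FORWARD 1000 FROM big_cur;  -- repeat until no rows come back
CLOSE big_cur;
COMMIT;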
Having one large, wide table is dictated by the system we're using (I know it's not an ideal DB structure). Can Postgres handle tables of this size, or will we keep running into problems?
Thanks for any help, Ben