
I have a table in a Postgres database that's used in a testing environment where we need to add and drop several columns at a time. The problem is that Postgres has a maximum of 1600 columns per table, and this count includes dropped columns. My table will never have 1600 "un-dropped" columns at once, but over time the total, including drops, accumulates past 1600.

I've tried using VACUUM and VACUUM FULL, and I've tried recasting an existing column to its own type (ALTER TABLE table ALTER COLUMN anycol TYPE anytype) to force Postgres to rewrite all of the columns and reclaim the space from dropped ones, but none of these reset Postgres' column numbering.
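For reference, here's how to see how many attribute slots a table has consumed, including dropped columns, which remain in pg_attribute as placeholders. The table name is a placeholder; substitute your own:

```sql
-- relnatts counts all attribute slots ever allocated for the table,
-- dropped columns included ('your_table_name' is a placeholder).
SELECT relnatts
FROM pg_class
WHERE relname = 'your_table_name';

-- Dropped columns stay behind with attisdropped = true:
SELECT attnum, attname, attisdropped
FROM pg_attribute
WHERE attrelid = 'your_table_name'::regclass
  AND attnum > 0;
```

If relnatts is approaching 1600, the table is nearing the limit regardless of how many live columns it has.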

I know that this could be solved by copying the entire table, but that has its own issues, and is in a separate question.

Do you know of a way to make postgres forget it had dropped columns?

I know that postgres was not designed for applications like this, but I'm not going to get into why we chose to implement it this way. If you have an alternate tool to use, I'd be interested to hear about it, but I'd still like to find a solution to this.

2 Comments
  • You said you've tried VACUUM on the table. What about VACUUM FULL? Or perhaps ALTER TABLE table ALTER COLUMN anycol TYPE anytype;, changing a column to the same type it already has? Commented Jul 7, 2011 at 4:34
  • As you already found out, there are at least two ways: 1) CREATE TABLE new AS SELECT * FROM the_table; then rename things. 2) pg_dump -t the_table; drop the table; recreate it from the pg_dump output. In BOTH cases you'll have to reconstruct the FK constraints (which is a bit simpler in the second case). Commented Mar 29, 2016 at 16:55
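The first approach from the comments above, copy-and-swap, can be sketched as follows. This is a minimal outline, not a complete recipe: the table name is a placeholder, and indexes, defaults, constraints, permissions, and anything else attached to the old table must be recreated by hand:

```sql
-- Sketch of the copy-and-swap approach (the_table is a placeholder name).
BEGIN;

-- The new table gets a fresh pg_attribute numbering with no dropped slots.
CREATE TABLE the_table_new AS SELECT * FROM the_table;

DROP TABLE the_table;
ALTER TABLE the_table_new RENAME TO the_table;

-- CREATE TABLE ... AS does not copy indexes, constraints, defaults,
-- or FK references from other tables; recreate them here, e.g.:
-- ALTER TABLE the_table ADD PRIMARY KEY (id);

COMMIT;
```

The pg_dump variant trades the manual reconstruction for a dump/restore cycle, since the dump already contains the constraint definitions.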

2 Answers


This is not possible, other than by recreating the table, as you already found out.

Otherwise the database system would have to keep track somehow when the storage used by a dropped column has been cleared everywhere, and then renumber the attributes. That would be incredibly expensive and complicated.


2 Comments

That's good to know. Now I just need to figure out how to effectively copy the table
To me, this looks like a job for VACUUM FULL (including ANALYZE?); it rewrites the table anyway and needs a table lock as well. An update of pg_class wouldn't be too expensive compared to all the other work it has to do.

The system table pg_attribute still shows old (dropped) columns. I don't know why, but it looks like a bug to me.

SELECT 
  relnatts, 
  attname, 
  attisdropped 
FROM 
  pg_class 
    JOIN pg_attribute att ON attrelid = pg_class.oid 
WHERE 
  relname = 'your_table_name';

Could you send a bug report, including a simple example, to [email protected] or http://www.postgresql.org/support/submitbug?

2 Comments

If memory serves, it's actually there for transaction-aware ALTER TABLE statements. But yeah, it should drop the row eventually. :-)
It's not a bug, I think. PG stores that data so it doesn't reuse the same attnum; otherwise it would have to rewrite the file after an ALTER.
