I'm trying to figure out the best way to enforce a unique constraint across multiple nullable columns in PostgreSQL.

Considering the following table:

CREATE TABLE test_table
(
  id serial NOT NULL,
  col_a character varying(255),
  col_b character varying(255),
  col_c date,
  col_d integer,
  CONSTRAINT test_table_pkey PRIMARY KEY (id),
  CONSTRAINT test_table_col_a_col_b_col_c_key UNIQUE (col_a, col_b, col_c)
);

The combination of col_a, col_b, and col_c must be unique but they are also all nullable.
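To illustrate the problem (default PostgreSQL behaviour, where NULLs in a UNIQUE constraint are treated as distinct), the plain constraint above does not block rows I would consider duplicates:

```sql
-- Both inserts succeed, because the NULL in col_c makes the
-- two rows distinct as far as the UNIQUE constraint is concerned:
INSERT INTO test_table (col_a, col_b, col_c) VALUES ('x', 'y', NULL);
INSERT INTO test_table (col_a, col_b, col_c) VALUES ('x', 'y', NULL);
```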

My current solution to enforce the unique constraint is to create six partial unique indexes, one for each combination of NULL columns (the all-non-NULL case is already covered by the UNIQUE constraint on the table):

CREATE UNIQUE INDEX test_table_uq_ab ON test_table (col_a, col_b) WHERE col_c IS NULL;
CREATE UNIQUE INDEX test_table_uq_ac ON test_table (col_a, col_c) WHERE col_b IS NULL;
CREATE UNIQUE INDEX test_table_uq_bc ON test_table (col_b, col_c) WHERE col_a IS NULL;
CREATE UNIQUE INDEX test_table_uq_a  ON test_table (col_a) WHERE col_b IS NULL AND col_c IS NULL;
CREATE UNIQUE INDEX test_table_uq_b  ON test_table (col_b) WHERE col_a IS NULL AND col_c IS NULL;
CREATE UNIQUE INDEX test_table_uq_c  ON test_table (col_c) WHERE col_a IS NULL AND col_b IS NULL;

Is this a 'sane' thing to do? Are there any significant performance issues I should be aware of?

1 Answer
As far as I know, this is the only way to do it declaratively (with CREATE TABLE, CREATE UNIQUE INDEX, etc.). The downside is that every one of those indexes must be maintained on each insert and update, which could become a problem if the table grows large.

This might not be applicable in all situations, but to avoid the need for so many indexes, I declare the columns NOT NULL and store a logical "empty" value instead (for example: "Empty", "None", or "1900-01-01"). Of course, later on, either in ad-hoc queries or in the application, you may have to decode it back to a real NULL.
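A sketch of that approach, reusing the table from the question (the sentinel values '' and 1900-01-01 are just examples; pick ones that cannot occur as real data):

```sql
CREATE TABLE test_table
(
  id serial NOT NULL,
  col_a character varying(255) NOT NULL DEFAULT '',          -- '' stands in for NULL
  col_b character varying(255) NOT NULL DEFAULT '',          -- '' stands in for NULL
  col_c date NOT NULL DEFAULT '1900-01-01',                  -- sentinel date for NULL
  col_d integer,
  CONSTRAINT test_table_pkey PRIMARY KEY (id),
  CONSTRAINT test_table_col_a_col_b_col_c_key UNIQUE (col_a, col_b, col_c)
);

-- Decoding the sentinels back to real NULLs when querying:
SELECT id,
       NULLIF(col_a, '') AS col_a,
       NULLIF(col_b, '') AS col_b,
       NULLIF(col_c, DATE '1900-01-01') AS col_c,
       col_d
FROM test_table;
```

With the columns NOT NULL, the single three-column UNIQUE constraint is enough and no partial indexes are needed.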
