My problem
Consider a table t that users update frequently by inserting new rows, of which only the last few per user are relevant.
To keep the table size reasonable, older rows for the same user_id are deleted whenever a new row is inserted. To keep an archive, the new row is also written to t_history.
Both t and t_history have the same schema, in which id is a bigserial with a primary key constraint.
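For concreteness, a minimal sketch of the schema I have in mind (the column names a, b, c and their text types are placeholders; the real tables have more columns):

CREATE TABLE t (
    id      bigserial PRIMARY KEY,  -- backed by sequence t_id_seq
    user_id bigint NOT NULL,
    a       text,
    b       text,
    c       text
);

-- Identical schema; its bigserial creates its own sequence
-- (t_history_id_seq by default)
CREATE TABLE t_history (
    id      bigserial PRIMARY KEY,
    user_id bigint NOT NULL,
    a       text,
    b       text,
    c       text
);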
Implementation
Stored procedure
CREATE FUNCTION update_t_history()
RETURNS trigger
AS
$$
BEGIN
    -- Archive the new row in t_history. `id` is autoincremented,
    -- so it is deliberately left out of the column list.
    INSERT INTO t_history (a, b, c, ...)
    VALUES (NEW.a, NEW.b, NEW.c, ...);
    -- Delete old rows from the t table, keeping the newest 10
    -- (the trigger runs AFTER INSERT, so NEW's row is already in t)
    DELETE FROM t WHERE id IN (
        SELECT id FROM t
        WHERE user_id = NEW.user_id
        ORDER BY id DESC
        OFFSET 10);
    RETURN NEW;
END;
$$
LANGUAGE plpgsql;
Corresponding insertion trigger:
CREATE TRIGGER t_insertion_trigger
AFTER INSERT ON t
FOR EACH ROW
EXECUTE PROCEDURE update_t_history();
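To exercise the trigger, a single insert looks like this (hypothetical values, assuming a, b, c are text columns):

INSERT INTO t (user_id, a, b, c)
VALUES (42, 'foo', 'bar', 'baz');
-- fires t_insertion_trigger: the row is copied into t_history
-- and user 42's older rows in t are trimmed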
The error
The trigger works well, but when I run a few dozen insertions in a single transaction, I get the following error:
BEGIN
ERROR: duplicate key value violates unique constraint "t_history_pkey"
DETAIL: Key (id)=(196) already exists.
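The batch is roughly the following, that is, individual inserts all inside one transaction (placeholder values; in reality the inserts come from application code):

BEGIN;
DO $$
BEGIN
    FOR i IN 1..30 LOOP
        INSERT INTO t (user_id, a, b, c)
        VALUES (42, 'foo ' || i, 'bar', 'baz');
    END LOOP;
END;
$$;
COMMIT;
-- fails with: duplicate key value violates unique constraint "t_history_pkey"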
Updates
- The id field in both tables looks like this (output from \d+ t; see the check after this list):

  id | bigint | not null default nextval('t_id_seq'::regclass)
  "t_pkey" PRIMARY KEY, btree (id)
- PostgreSQL version is 9.3.
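To rule out the two tables sharing a sequence, a check along these lines compares the id defaults and the history sequence's position (the sequence name t_history_id_seq is assumed from PostgreSQL's default naming convention):

SELECT table_name, column_default
FROM information_schema.columns
WHERE table_name IN ('t', 't_history') AND column_name = 'id';

-- If last_value lags behind max(id), nextval() will eventually
-- hand out an id that already exists in t_history
SELECT last_value FROM t_history_id_seq;
SELECT max(id) FROM t_history;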
Any idea why the stored procedure breaks the primary key constraint when the inserts run inside a single transaction?
Comments
- A few dozen inserts in one transaction, or a single multi-valued insert statement?
- Is t_history.id autoincremented? Best provide the table definition you get with \d tbl in psql. And are you sure you are not copying t.id in the INSERT statement?