
I have millions of rows in the pg_largeobject_metadata table that I want to delete. What I have tried so far:

  • A simple SELECT lo_unlink(oid) works fine
  • A PERFORM lo_unlink(oid) in a loop of 10,000 rows also works fine
  • But when I unlink a large number of rows in one run, I get the error below. I cannot increase max_locks_per_transaction because it is managed by AWS.

ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
CONTEXT: SQL statement "SELECT lo_unlink(c_row.oid)"
PL/pgSQL function inline_code_block line 21 at PERFORM
SQL state: 53200

Here is the program I wrote, but I still get the "out of shared memory" error.

DO $proc$
DECLARE
    v_fetch  bigint;
    v_offset bigint;
    nbRows   bigint;
    c_row    record;
    c_rows   CURSOR(p_offset bigint, p_fetch bigint) FOR
        SELECT oid FROM pg_largeobject_metadata
        WHERE oid BETWEEN 1910001 AND 2900000
        OFFSET p_offset ROWS FETCH NEXT p_fetch ROWS ONLY;
BEGIN
    v_offset := 0;
    v_fetch  := 100;
    SELECT count(*) INTO nbRows
    FROM pg_largeobject_metadata
    WHERE oid BETWEEN 1910001 AND 2900000;
    RAISE NOTICE 'nbRows = %', nbRows;
    LOOP                                    -- loop over the batches
        RAISE NOTICE 'offset = %', v_offset;
        OPEN c_rows(v_offset, v_fetch);
        LOOP                                -- loop through the cursor results
            FETCH c_rows INTO c_row;
            EXIT WHEN NOT FOUND;
            PERFORM lo_unlink(c_row.oid);
        END LOOP;
        CLOSE c_rows;
        EXIT WHEN v_offset > nbRows;
        v_offset := v_offset + v_fetch;     -- advance to the next batch of 100 rows
    END LOOP;
END;
$proc$;

I am using PostgreSQL 9.5. Has anyone faced this issue and can you help, please?

1 Answer


Each lo_unlink() grabs a lock on the object it deletes. These locks are freed only at the end of the transaction, and their total number is capped at max_locks_per_transaction * (max_connections + max_prepared_transactions) (see Lock Management in the PostgreSQL documentation). By default max_locks_per_transaction is 64, and cranking it up by several orders of magnitude is not a good solution.
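For reference, you can check the values that bound the lock table on your instance with a query against the built-in pg_settings view (these are standard PostgreSQL setting names):

SELECT name, setting
FROM pg_settings
WHERE name IN ('max_locks_per_transaction',
               'max_connections',
               'max_prepared_transactions');

With the defaults (64, 100 and 0 respectively), the shared lock table holds roughly 64 * 100 = 6400 entries, which a single transaction unlinking millions of large objects exhausts quickly.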

The typical solution is to move the outer LOOP from your DO block into your client-side code, and commit the transaction at each iteration (so each transaction removes 10000 large objects and commits).
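Here is a minimal client-side sketch of that approach, assuming Python with psycopg2 (the connection string is a placeholder; the batch size and oid range are taken from the question):

import psycopg2

BATCH = 10000  # large objects to unlink per transaction

conn = psycopg2.connect("dbname=mydb")  # placeholder connection string
cur = conn.cursor()

while True:
    # lo_unlink() also deletes the pg_largeobject_metadata row, so
    # re-running the same LIMIT query picks up the next batch; no
    # OFFSET bookkeeping is needed.
    cur.execute("""
        SELECT lo_unlink(oid)
        FROM pg_largeobject_metadata
        WHERE oid BETWEEN 1910001 AND 2900000
        LIMIT %s
    """, (BATCH,))
    n = cur.rowcount
    conn.commit()   # releases the locks taken by lo_unlink in this batch
    if n < BATCH:
        break       # last (partial) batch processed

cur.close()
conn.close()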

Starting with PostgreSQL 11, a COMMIT inside the DO block is possible, just as transaction control inside procedures is possible.
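A sketch of what that could look like on PostgreSQL 11 or later (same oid range as above; note that a DO block can only use COMMIT when it is not invoked inside an outer transaction):

DO $proc$
DECLARE
    c_row record;
    n     int := 0;
BEGIN
    FOR c_row IN
        SELECT oid FROM pg_largeobject_metadata
        WHERE oid BETWEEN 1910001 AND 2900000
    LOOP
        PERFORM lo_unlink(c_row.oid);
        n := n + 1;
        IF n % 10000 = 0 THEN
            COMMIT;   -- frees the accumulated locks every 10000 objects
        END IF;
    END LOOP;
END;
$proc$;

The first COMMIT converts the loop's cursor into a holdable cursor, so the query is fully materialized at that point; with only oids in the result set, that is cheap.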
