
(Postgres 8.3)

I'm working with a DB table X that is 100+ columns wide (which, sadly, I can't change), many of which get updated constantly and very frequently by normal business processes.

I have a requirement to update table Y based on updates to a particular column foo in X, which is only changed by an unusual business process. However, because of the very high number of updates against X, simply applying a trigger that inspects X.foo to decide whether to update Y is considered unacceptable.

Table Y is not the end of the line either; there is a chain of ancestors a few deep, and the change needs to bubble all the way up to the root.

The only solutions I can think of are:

  • breaking X into multiple tables (not allowed to)
  • explicitly making the updates to Y (and Z and others) part of the business logic for updating X, but this has a big footprint and leaves a lot of room for somebody to get it wrong or miss it when they implement the same in another process. It's also clearly just not good design (which I'm trying to gradually fix where I can).

Does anyone know a way to limit trigger execution by column, or any other alternative? Triggers on views? Other voodoo?

3 Answers


You might be able to do something with rules, but as has been said before, triggers should probably be "good enough". If you are trying to solve a management problem rather than a technical one, though, rules might help you: they are applied much earlier, during query rewriting. Beware of some pitfalls with them, typically around sequences and the like.
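
For example, a conditional rule might look roughly like the sketch below. This assumes a hypothetical schema where X has a primary key id and Y references it via x_id and holds a copy of foo; adjust to your actual tables:

    -- Add an extra UPDATE to the rewritten query, but only when foo actually changes
    CREATE RULE x_foo_bubble AS
        ON UPDATE TO x
        WHERE NEW.foo IS DISTINCT FROM OLD.foo
        DO ALSO
            UPDATE y SET foo = NEW.foo WHERE y.x_id = NEW.id;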


1 Comment

Went with rules in the end; yet to benchmark, but as it is indeed a management problem, having an alternative solution was enough for now. Future viewers looking for an answer: hope you have PG 9!

Unfortunately, until version 9.0 is released (which includes both column triggers and a WHEN clause for triggers), you'll have to resort to the second solution.
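
For reference, once 9.0 is available the combination could look something like this sketch, where propagate_foo_to_y() is a hypothetical trigger function you would write to do the actual update of Y:

    -- 9.0+: fire only for UPDATEs that list foo, and only when its value really changes
    CREATE TRIGGER x_foo_changed
        AFTER UPDATE OF foo ON x
        FOR EACH ROW
        WHEN (OLD.foo IS DISTINCT FROM NEW.foo)
        EXECUTE PROCEDURE propagate_foo_to_y();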

2 Comments

Thanks for this, but just to clear up some confusion: this looks like 8.5, not 9.0?
Right now it's the 8.5 alpha, but it will be released as 9.0.

Why is a standard trigger unacceptable? Running a function which first checks whether NEW.column_name = OLD.column_name, and simply returns if they are the same, is cheap. You can fire hundreds of thousands of them per second. Your system probably cannot handle more than several hundred transactions per second, which is three orders of magnitude less.
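
A minimal sketch of that kind of cheap guard, under the same hypothetical schema as above (X has a primary key id, Y references it via x_id and holds a copy of foo):

    CREATE OR REPLACE FUNCTION x_foo_guard() RETURNS trigger AS $$
    BEGIN
        -- Cheap early exit: the vast majority of updates never touch foo
        IF NEW.foo IS NOT DISTINCT FROM OLD.foo THEN
            RETURN NEW;
        END IF;
        -- foo changed: do the (comparatively rare) propagation to Y
        UPDATE y SET foo = NEW.foo WHERE y.x_id = NEW.id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER x_foo_update
        AFTER UPDATE ON x
        FOR EACH ROW
        EXECUTE PROCEDURE x_foo_guard();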

Conditional, deferred AFTER triggers in 9.0 would be faster, but only about two times faster than an ordinary trigger. See the relevant post on Depesz's blog. You can run some benchmarks on the Postgres 9 development version.

2 Comments

Thanks for the blog link. I strongly suspect this is a case of mindless optimisation, but nonetheless I am required to do this. Triggers have been deemed unworthy by people with more authority than me. :/ (fwiw we're looking at a few thousand updates a second).
Maybe you could construct a benchmark with a "special new technique using a special kind of trigger which is specially optimized for this kind of situation", with a trigger like "begin if OLD.colname = NEW.colname then return NEW; end if; perform expensive_function(); return NEW; end;" and some very smart-sounding (and default ;-)) modifiers like "VOLATILE STRICT SECURITY INVOKER".
