
DATABASE

I have a normalized Postgres 9.1 database in which I have written some functions. One function in particular, "fn_SuperQuery"(param, param, ...), returns SETOF RECORD and should be thought of as a view that accepts parameters. This function has a lot of overhead because it creates several temporary tables while calculating its results, in order to perform well on large data sets.
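As a rough illustration, a function of this shape might look like the following sketch (the names, parameters, and columns are all assumptions, since the asker's actual code isn't shown):

```sql
-- Hypothetical sketch of a SETOF RECORD function that stages its
-- intermediate work in an indexable temporary table:
CREATE OR REPLACE FUNCTION "fn_SuperQuery"(p_start date, p_end date)
RETURNS SETOF RECORD AS $$
BEGIN
    DROP TABLE IF EXISTS tmp_stage;       -- allow repeat calls in one session
    CREATE TEMPORARY TABLE tmp_stage AS   -- stage work so it can be indexed
        SELECT customer_id, sum(amount) AS total
        FROM payments
        WHERE paid_on BETWEEN p_start AND p_end
        GROUP BY customer_id;
    CREATE INDEX ON tmp_stage (customer_id);
    ANALYZE tmp_stage;

    RETURN QUERY SELECT customer_id, total FROM tmp_stage;
END;
$$ LANGUAGE plpgsql;

-- A SETOF RECORD function needs a column definition list at call time:
SELECT * FROM "fn_SuperQuery"('2013-01-01', '2013-02-01')
    AS t(customer_id int, total numeric);
```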

On a side note, I used to use WITH queries (CTEs) exclusively for this, but I needed the ability to add indexes on some columns for more efficient joins.
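The trade-off can be shown with a small sketch (table and column names are hypothetical): a CTE's result set cannot be indexed, while a temporary table holding the same intermediate result can be.

```sql
-- CTE version: "big" cannot be indexed, so a large join may be slow.
WITH big AS (
    SELECT customer_id, sum(amount) AS total
    FROM payments GROUP BY customer_id
)
SELECT c.name, b.total
FROM customers c JOIN big b ON b.customer_id = c.id;

-- Temp-table version: same intermediate result, but indexable.
CREATE TEMPORARY TABLE big AS
    SELECT customer_id, sum(amount) AS total
    FROM payments GROUP BY customer_id;
CREATE INDEX ON big (customer_id);
ANALYZE big;  -- give the planner statistics on the new table
SELECT c.name, b.total
FROM customers c JOIN big b ON b.customer_id = c.id;
```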

PHP

I use PHP strictly to connect to the database, run a query, and return the results as JSON. Each query starts with a connection string and then finishes with a call to pg_close.

FRONTEND

I am using jQuery's .ajax function to call the PHP file and accept the results.


My problem is this:

"fn_SuperQuery"(param,param, ...)" is actually the foundation for several other queries. There are some parts of this application that need to run several queries at once to generate all the necessary information for the end user. Many of these queries rely on the output of "fn_SuperQuery"(param,param, ...)" The overhead in running this query is pretty steep, and the fact that it would return the same data if given the same parameters makes me think that it's dumb to make the user wait for it to run twice.

What I want to do is put the results of "fn_SuperQuery"(param, param, ...) into a temporary table, run the other queries that need its data, and then discard the temporary table.

I understand that PostgreSQL ... requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. If I could get two PHP files to connect to the same database session then they should both be able to see the temporary table.

Any idea on how to do this? ... or maybe a different approach I have yet to consider?

2 Answers


It may be better to use normal tables; there will not be much difference. You can speed them up by making them unlogged.
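Concretely, the workflow might look like this sketch (the cache table's name and the column definition list are assumptions, since the function's real signature isn't shown):

```sql
-- Unlogged tables skip write-ahead logging, so they are much faster to
-- fill; their contents are truncated after a crash, which is fine for
-- a throwaway result set.
CREATE UNLOGGED TABLE superquery_cache AS
    SELECT * FROM "fn_SuperQuery"('2013-01-01', '2013-02-01')
        AS t(customer_id int, total numeric);
CREATE INDEX ON superquery_cache (customer_id);

-- ... run the dependent queries against superquery_cache,
--     from any session/connection ...

DROP TABLE superquery_cache;
```

Because it is a normal (if unlogged) table, every connection can see it, which works around the session-private nature of temporary tables.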

In 9.3 the better choice would probably be materialized views.
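For reference, on 9.3 or later (not the asker's 9.1) that might look like the following sketch, again with an assumed column list:

```sql
-- A materialized view stores the function's result once and can be
-- recomputed on demand (available from PostgreSQL 9.3).
CREATE MATERIALIZED VIEW super_mv AS
    SELECT * FROM "fn_SuperQuery"('2013-01-01', '2013-02-01')
        AS t(customer_id int, total numeric);
CREATE INDEX ON super_mv (customer_id);

-- Recompute when the underlying data changes:
REFRESH MATERIALIZED VIEW super_mv;
```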


2 Comments

Putting it in a "normal" table would be denormalization, and not an option. I'll look into the materialized view; however, I'm limited to 9.1 for the time being.
I meant: create a new table, use it, then drop it, just like a temporary table.

Temporary tables are session-private. If you want to share them across different sessions, use normal tables (probably unlogged).

If you are worried about denormalization, the first thing I would look at is storing these temporary normal tables ;-) in a separate schema. This lets you keep the denormalized (working-set) data separate for analysis and avoids polluting the rest of your dataset with denormalized tables.
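A minimal sketch of that layout (schema, table, and column names are illustrative):

```sql
-- One-time setup: a schema dedicated to throwaway/denormalized data.
CREATE SCHEMA scratch;

-- Stage the function's output there, use it, then drop it:
CREATE UNLOGGED TABLE scratch.superquery_results AS
    SELECT * FROM "fn_SuperQuery"('2013-01-01', '2013-02-01')
        AS t(customer_id int, total numeric);

-- ... dependent queries read scratch.superquery_results ...

DROP TABLE scratch.superquery_results;
```

Keeping such tables out of the default search path also makes it obvious which relations are transient and safe to drop.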

Alternatively, you could look at other options short of denormalization. For example, if data stops changing after a while, you could periodically insert summary entries for that unchangeable data. This is not denormalization, since it lets you purge old detail records down the line while keeping certain forms of reporting available.
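A sketch of that pattern, with hypothetical table names:

```sql
-- Month-level summary rows for data that no longer changes.
CREATE TABLE monthly_totals (
    month date PRIMARY KEY,
    total numeric NOT NULL
);

-- Periodically roll up completed months:
INSERT INTO monthly_totals (month, total)
SELECT date_trunc('month', order_date)::date, sum(amount)
FROM orders
WHERE order_date < date_trunc('month', now())
GROUP BY 1;

-- Old detail rows in "orders" can later be purged without losing
-- month-level reporting.
```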

Comments
