
I have 3 queries that are running very slowly. All of them do essentially the same thing: select from a view in Oracle (via the oracle_fdw extension). All the views look like this: select /*+ parallel(table_name,4) */ column1, column2, replace(column3, ' ', null), ..., replace(...), column8 from table_name.

* Each table has more than 40 columns, so I have only shown the general format of the query.
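
For illustration, each Oracle-side view has roughly this shape (a sketch only; the view, table and column names are placeholders, not my real schema):

create or replace view table1_view as
select /*+ parallel(table1, 4) */
       column1,
       column2,
       replace(column3, ' ', null),
       -- ... more than 40 columns in total, several of them wrapped in replace(...)
       column8
from table1;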

The SELECTs I run in PostgreSQL look like this:

select * from oracle_table1_view;
  • table1 size is 10G with 100,000,000 records.

  • table2 size is 1.3G with 6,000,000 records.

  • table3 size is 8G with 75,000,000 records.

All of this happens as part of a big function that pulls data from the Oracle database. Before importing the data into the local PostgreSQL tables I drop their indexes and constraints, and after the import I recreate them.
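
Roughly, the per-table import step looks like this sketch (the local table, index, constraint and column names are placeholders):

-- drop constraints and indexes before the load
alter table local_table1 drop constraint local_table1_pk;
drop index local_table1_idx;

-- bulk load from the foreign table that points at the Oracle view
insert into local_table1
select * from oracle_table1_view;

-- recreate constraints and indexes after the load, then gather statistics
alter table local_table1 add constraint local_table1_pk primary key (id);
create index local_table1_idx on local_table1 (column2);
analyze local_table1;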

Some info about my server:

  • I have 5 GB of RAM on the server, and 4 GB of it is free.

  • I have 2 CPUs.

Some info about my PostgreSQL instance:

Currently I have only 1 db on the instance.

shared_buffers = 1000MB
effective_cache_size = 2GB
autovacuum = on
work_mem = 4MB
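
These values can be confirmed from psql with SHOW, for example:

show shared_buffers;
show effective_cache_size;
show autovacuum;
show work_mem;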

Moreover, I have a lot of SELECT * FROM foreign_table statements. All of them take some time, but those three take far too long. Please help me improve the performance of those three and, if you can, of all my SELECTs.

  • Could you post an extract of your PostgreSQL server log file? It might be useful to see whether you're using a lot of temporary files, for example... Commented Jul 27, 2017 at 13:02
  • Will it be helpful if I put the content of the log from after I run the query? If yes: 2017-07-27 16:05:13 IDT u postdbSTATEMENT: select * from foreign_table; I'd prefer not to upload the whole server log. Can you guide me on what specific info to search for? Commented Jul 27, 2017 at 13:07
  • I understand. First, the logging settings: I think you need to turn on checkpoint logging, logging of all temporary files (minimum size 0) and log_lock_waits (see the sketch after these comments). Then I'd like to know whether you get errors or warnings in the log file when you run the query (such as checkpoints occurring too frequently) and whether you created temporary files (and if so, how many and of which size). Commented Jul 27, 2017 at 13:21
  • Temporary files | Size of temporary files
    -----------------+-------------------------
                   0 | 0
                   0 | 0
                   0 | 0
                  99 | 51190833152
                  17 | 3451628
    Commented Jul 27, 2017 at 13:23
  • 2017-07-27 16:31:12 IDT u LOG: parameter "log_connections" changed to "on"
    2017-07-27 16:31:12 IDT u LOG: parameter "log_lock_waits" changed to "on"
    2017-07-27 16:31:12 IDT u LOG: parameter "log_statement" changed to "all"
    2017-07-27 16:31:12 IDT u LOG: parameter "log_temp_files" changed to "0"
    2017-07-27 16:31:14 IDT u [unknown]LOG: connection received: host=[local]
    2017-07-27 16:32:01 IDT u postdbLOG: statement: select * from foreign_table;
    Commented Jul 27, 2017 at 13:33
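
A sketch of enabling the logging settings suggested in the comments above (this assumes superuser access; editing postgresql.conf and reloading the server works just as well):

alter system set log_checkpoints = on;
alter system set log_temp_files = 0;   -- log every temporary file, whatever its size
alter system set log_lock_waits = on;
select pg_reload_conf();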

1 Answer


Do the queries run fast when you execute them with sqlplus?

If not, you have to solve the problem on the Oracle side.

To see the Oracle execution plan used by oracle_fdw, run

EXPLAIN (VERBOSE) SELECT * FROM oracle_table1_view;

Check whether that matches the plan you get when you run the query from sqlplus. If not, try to spot the difference and figure out why.
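
For the sqlplus side of that comparison, one option is EXPLAIN PLAN together with DBMS_XPLAN (a sketch; the view name is a placeholder):

EXPLAIN PLAN FOR SELECT * FROM table1_view;
SELECT * FROM TABLE(dbms_xplan.display);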

If the plan looks the same, but the execution time is different, it could be that you select some LOB columns. Row prefetching does not work if such columns are involved, so there will be one round trip from PostgreSQL to Oracle for each selected row, which can make things really slow.
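
If LOB columns turn out to be the culprit, one workaround is to select only the columns you actually need and leave the LOB columns out, so that row prefetching can be used again (a sketch; the column names are placeholders):

-- list the non-LOB columns explicitly instead of SELECT *
SELECT column1, column2, column8 FROM oracle_table1_view;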


10 Comments

The queries take time on the Oracle side as well, but there they aren't killed. When I run those SELECTs in psql, they end and I just see "Killed" at the prompt.
What does this "Killed" mean? Can you explain in more detail? Who kills what?
server:[~] : psql -d mydb -U user
psql (9.5.7)
Type "help" for help.
mydb=> select * from remote_Table;
Killed
I bet that is the OOM killer on Linux killing psql because your system is running out of memory. Check the kernel log to confirm. The problem is psql, which stores the entire query result in memory and gags. Try \set FETCH_COUNT 1000 or similar in psql (see the sketch after this comment thread).
This is drifting off... It looks like you want to create local tables by selecting from the foreign tables. It does not matter if you run ANALYZE before or after you create indexes and constraints, unless there is an index based on an expression rather than a column. The only difference between ANALYZE and VACUUM (ANALYZE) for a newly created table is that the latter will set hint bits for all table rows and thus might speed up the first SELECT.
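
A minimal psql session with that workaround might look like this (the table name is a placeholder):

-- fetch the result in batches of 1000 rows instead of reading it all into psql's memory at once
\set FETCH_COUNT 1000
SELECT * FROM oracle_table1_view;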
