I have a PostgreSQL operational DB with data partitioned by day and a PostgreSQL data warehouse (DWH) DB. I would like to copy the data from the operational DB to the DWH as fast as possible and with the fewest resources used. Since the tables are partitioned by day, I understand that each partition is a table in itself. Does that mean I can somehow copy the data files between the machines and create the tables in the DWH from those data files? What is the best practice in that case?
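For context, the per-partition approach I'm imagining looks roughly like this (a minimal sketch only; the hosts, database names, and the partition name `events_2024_01_15` are hypothetical placeholders for my actual daily partitions):

```sh
# Dump only one day's partition from the operational DB, using the
# custom format so the restore can run in parallel
pg_dump -h operational-host -d opdb \
        -t public.events_2024_01_15 \
        -Fc -f events_2024_01_15.dump

# Restore that single partition into the DWH with parallel jobs
pg_restore -h dwh-host -d dwh -j 4 events_2024_01_15.dump
```

What I can't judge is whether running something like this once a day is cheaper for the operational DB than copying the raw data files, or than replication.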
EDIT: I will answer all the questions asked here:

1. I'm building an ETL process. The first step of the ETL is to copy the data with the least possible impact on the operational DB.
2. I would be happy to replicate the data, as long as this doesn't slow down writes on the operational DB.
3. Some more details: the operational DB is not my responsibility, but the main concern is write time on that DB. It writes about 500 million rows a day; some hours are more heavily loaded, but there is no hour without writes at all.
4. I came across a few tools/approaches, such as replication and pg_dump (see the sketch below), but I couldn't find anything that compares them, so I don't know when to use which one and what fits my case.
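For point 4, the replication option I'm weighing is logical replication, which as I understand it would look roughly like this (a sketch under assumptions: PostgreSQL 13+ so the partitioned table can be published via its root, `wal_level = logical` on the operational DB, and hypothetical names throughout):

```sh
# On the operational DB: publish the partitioned table; with
# publish_via_partition_root the DWH only needs a matching root table
psql -h operational-host -d opdb -c \
  "CREATE PUBLICATION dwh_pub FOR TABLE events
     WITH (publish_via_partition_root = true);"

# On the DWH: subscribe (the events table must already exist there)
psql -h dwh-host -d dwh -c \
  "CREATE SUBSCRIPTION dwh_sub
     CONNECTION 'host=operational-host dbname=opdb user=replicator'
     PUBLICATION dwh_pub;"
```

My understanding is that pg_dump takes a point-in-time snapshot each run, while logical replication streams changes continuously; what I can't tell is which of the two puts less load on a DB writing 500 million rows a day, and that is the comparison I'm looking for.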