
I tried searching for this but couldn't find an answer.

What is the best way to copy data from Redshift to a PostgreSQL database?

Using a Talend job, any other tool, code, etc.

Either way, I want to transfer data from Redshift to a PostgreSQL database; a third-party database tool is also fine if it has this kind of functionality.

Also, as far as I know, we could do this with AWS Data Migration Service, but I'm not sure whether our source and destination databases meet its criteria.

Can anyone please suggest something better?

  • Please note that we don't build solutions for you or perform research on your behalf. Please update your question with something you have actually tried along with the specific issues or errors you are getting. Commented Jun 10, 2019 at 9:18
  • @I.TDelinquent I changed it. Commented Jun 10, 2019 at 9:38
  • AWS DMS does not include the option of Redshift as a source. Commented Jun 10, 2019 at 10:27

2 Answers


The way I do it is with a Postgres Foreign Data Wrapper and dblink.

This way, the Redshift table is available directly within Postgres.

Follow the instructions here to set it up: https://aws.amazon.com/blogs/big-data/join-amazon-redshift-and-amazon-rds-postgresql-with-dblink/

The important part of that link is this code:

-- Enable the foreign data wrapper and dblink extensions on the
-- RDS PostgreSQL instance.
CREATE EXTENSION postgres_fdw;
CREATE EXTENSION dblink;

-- Register the Redshift cluster as a foreign server.
CREATE SERVER foreign_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host '<amazon_redshift_ip>', port '<port>', dbname '<database_name>', sslmode 'require');

-- Map the local PostgreSQL user to a Redshift login.
CREATE USER MAPPING FOR <rds_postgresql_username>
        SERVER foreign_server
        OPTIONS (user '<amazon_redshift_username>', password '<password>');

For my use case I then set up a postgres materialised view with indexes based upon that.

create materialized view if not exists your_new_view as
SELECT some,
       columns,
       etc
   FROM dblink('foreign_server'::text, '
<the redshift sql>
'::text) t1(some bigint, columns bigint, etc character varying(50));

create unique index if not exists index1
    on your_new_view (some);

create index if not exists index2
    on your_new_view (columns);

Then on a regular basis I run (on Postgres):

REFRESH MATERIALIZED VIEW your_new_view;

or, to avoid blocking readers during the refresh (this form requires the unique index created above):

REFRESH MATERIALIZED VIEW CONCURRENTLY your_new_view;
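The refresh above can be scheduled with cron and psql. A minimal sketch, assuming credentials are supplied via a ~/.pgpass file; the hostname, database, and user names are placeholders:

```shell
# crontab entry: refresh the materialized view every hour, on the hour.
# psql reads the password for this host/user from ~/.pgpass.
0 * * * * psql --host=my-postgres.example.com --dbname=my_database \
    --username=postgres_user \
    --command="REFRESH MATERIALIZED VIEW CONCURRENTLY your_new_view;"
```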

2 Comments

But I have to do this for every schema and table.
Correct, for each one that you want. Depending on your use case you may prefer a different solution if you can find one; I do not know of any (DMS does not work). Perhaps consider WHY you are transferring the data and add that info to your question.

In the past, I managed to transfer data from one PostgreSQL database to another by doing a pg_dump and piping the output as an SQL command to the second instance.

Amazon Redshift is based on PostgreSQL, so this method should work, too.

You can control whether pg_dump should include the DDL to create tables, or whether it should just load the data (--data-only).

See: PostgreSQL: Documentation: 8.0: pg_dump
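A sketch of that pipeline, with hypothetical hostnames and table/database names; this assumes network access and credentials are already set up, and note that recent pg_dump versions may not connect cleanly to Redshift, since Redshift's catalogs diverged from the PostgreSQL 8.0 line it was forked from:

```shell
# Dump one table's data from the source (Redshift) and pipe the
# resulting SQL straight into the target PostgreSQL instance.
# All hostnames, names, and ports below are placeholders.
pg_dump --data-only --table=my_table \
    --host=my-cluster.example.redshift.amazonaws.com --port=5439 \
    --username=redshift_user my_database \
  | psql --host=my-postgres.example.com --port=5432 \
    --username=postgres_user my_database
```

Dropping --data-only makes pg_dump also emit the CREATE TABLE DDL, which is useful when the target tables do not exist yet.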
