I am using SQLAlchemy in Python 3 to store pandas DataFrames in a PostgreSQL table. Things work with 20M rows, but with about 78M rows I get
Got 75032111 rows.
Total time taken 11222.68 s.
Finished at 2018-05-04 06:07:34.
Killed
where the storing gets killed. I use the pandas DataFrame.to_sql method with a SQLAlchemy engine: df.to_sql(dbName, engine).
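For reference, a minimal sketch of the pattern described above; the connection string, table name, and DataFrame contents are placeholders, not details from the question:

    import pandas as pd
    from sqlalchemy import create_engine

    # Placeholder connection string for a local PostgreSQL instance.
    engine = create_engine("postgresql://user:password@localhost:5432/mydb")

    # Small stand-in for the ~75M-row DataFrame from the question.
    df = pd.DataFrame({"a": range(10), "b": range(10)})

    # Single call that writes the whole frame in one go.
    df.to_sql("my_table", engine, if_exists="replace", index=False)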
Is there some limit on storing data in a PostgreSQL database with SQLAlchemy in Python? What is the preferred way to store big tables? Is there some command to resume or continue storing if the write gets interrupted because of the large size?
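One possible approach (an assumption for illustration, not a built-in "sync command") is to append the DataFrame in explicit slices, so a failed run can be resumed from the last committed slice. Table name, column names, and slice size below are made up:

    import pandas as pd
    from sqlalchemy import create_engine, text

    engine = create_engine("postgresql://user:password@localhost:5432/mydb")
    table = "my_table"
    slice_size = 1_000_000

    df = pd.DataFrame({"a": range(50), "b": range(50)})  # placeholder for the big frame

    # Count how many rows already made it into the table in a previous run.
    with engine.connect() as conn:
        try:
            done = conn.execute(text(f"SELECT count(*) FROM {table}")).scalar()
        except Exception:
            done = 0  # table does not exist yet

    # Append the remaining rows slice by slice; each to_sql call commits
    # its own insert, so a crash only loses the slice in flight.
    for start in range(done, len(df), slice_size):
        chunk = df.iloc[start:start + slice_size]
        chunk.to_sql(table, engine, if_exists="append", index=False)
        print(f"committed rows {start}..{start + len(chunk)}")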
Comment: Check dmesg and see if that is exactly the case (the kernel's OOM killer). There will be very apparent spam that says it killed Python.

Reply: I will try chunksize=100000; I don't know about the heat problem yet, I will have to monitor it. Thank you for helping to figure it out, +1
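A sketch of the chunksize suggestion from the comments: passing chunksize to to_sql makes pandas write the rows in batches of that size instead of all at once, which keeps peak memory bounded. Connection details and names are placeholders:

    import pandas as pd
    from sqlalchemy import create_engine

    engine = create_engine("postgresql://user:password@localhost:5432/mydb")
    df = pd.DataFrame({"a": range(10), "b": range(10)})  # placeholder for the real frame

    # Insert 100,000 rows per batch rather than the whole frame in one statement.
    df.to_sql("my_table", engine, if_exists="append", index=False, chunksize=100000)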