I have a Postgres database. I'm using Flask and SQLAlchemy. Recently I started getting a lot of errors like "sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10 reached". When I see that my server is unresponsive, I have to reboot it. I need to figure out what I'm doing wrong and how to avoid this issue. I use only two approaches in the code:
- Working with models, for example:
users = User.query.order_by(desc(User.created)) \
    .limit(users_per_page) \
    .offset((page_number - 1) * users_per_page) \
    .all()
- Doing direct queries on sessions like:
with contextlib.closing(db.session) as session:
    data = session.query(task_function, Stat.activity, func.count()).filter(
        Stat.created.in_(subq)).group_by(period_function,
        Stat.activity_detail).order_by(desc(period_function))
I started seeing the problems after I created and deployed the second piece of code. Doesn't that pattern work? What else can be done?
Here is how I set up the connection: I keep the database URL in an environment variable and just create an instance of SQLAlchemy:
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
I saw a lot of suggestions, but I can't figure out how to apply them, because they all use some "manual" connection setup, like calling the create_engine function, which I don't use.
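If I understand those suggestions correctly, they would translate to Flask-SQLAlchemy configuration like the sketch below: the SQLALCHEMY_ENGINE_OPTIONS config key passes keyword arguments through to the create_engine call that Flask-SQLAlchemy makes internally. The values and the env var name here are placeholders, not recommendations:

import os
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = os.environ["DATABASE_URL"]  # env var name is just an example
app.config["SQLALCHEMY_ENGINE_OPTIONS"] = {
    "pool_size": 5,         # the "size 5" from the error message
    "max_overflow": 10,     # the "overflow 10" from the error message
    "pool_timeout": 30,     # seconds to wait before raising TimeoutError
    "pool_recycle": 1800,   # drop connections older than 30 minutes
    "pool_pre_ping": True,  # test connections before handing them out
}
db = SQLAlchemy(app)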
Note: my database is hosted on AWS RDS.
UPD. I was able to fix the issue; at least I don't see hanging sessions anymore, and the number of connections stays reasonable. I added session.commit() into the "with" block, roughly like the sketch below (the actual query is the one shown above):
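import contextlib

with contextlib.closing(db.session) as session:
    data = session.query(Stat).all()  # stands in for the real query above
    session.commit()  # ends the transaction, so the connection goes back to the pool

I also closed the session wherever possible: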
engine_container = None  # module-level holder, implied by the None check below

with app.app_context():
    if engine_container is None:
        engine_container = db.get_engine()

def cleanup_session(session):
    """
    Clean up the session object and also close the connection pool
    using the dispose method.
    """
    global engine_container
    session.close()
    engine_container.dispose()
I took the idea from this post: How do I close a Flask-SQLAlchemy connection that I used in a Thread? Another option I saw is to use an engine context manager to handle the reuse of connections.
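For completeness, here is roughly what I understand by an engine context manager (a sketch; the stat table name is just an example):

from sqlalchemy import text

with app.app_context():
    engine = db.get_engine()

# A connection checked out via "with" is returned to the pool even if
# the query raises, which avoids leaking pooled connections.
with engine.connect() as conn:
    result = conn.execute(text("SELECT count(*) FROM stat"))  # example query
    total = result.scalar()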