
I've run into a problem running tests for a FastAPI + SQLAlchemy app on PostgreSQL, which leads to lots of errors (it works fine on SQLite). I've created a repo with an MVP app and a Pytest suite that runs under Docker Compose.

The basic error is sqlalchemy.exc.InterfaceError('cannot perform operation: another operation is in progress'). It may be related to the app/DB initialization, though I checked that all the operations are performed sequentially. I also tried using a single TestClient instance for all the tests, but that didn't help. I hope to find a solution, a correct way to test such apps 🙏

Here are the most important parts of the code:

app.py:

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

from db import async_session, create_database, create_object, get_objects

app = FastAPI()
some_items = dict()

@app.on_event("startup")
async def startup():
    await create_database()
    # Extract some data from env, local files, or S3
    some_items["pi"] = 3.1415926535
    some_items["eu"] = 2.7182818284

@app.post("/{name}")
async def create_obj(name: str, request: Request):
    data = await request.json()
    if data.get("code") in some_items:
        data["value"] = some_items[data["code"]]
        async with async_session() as session:
            async with session.begin():
                await create_object(session, name, data)
        return JSONResponse(status_code=200, content=data)
    else:
        return JSONResponse(status_code=404, content={})

@app.get("/{name}")
async def get_connected_register(name: str):
    async with async_session() as session:
        async with session.begin():
            objects = await get_objects(session, name)
    result = []
    for obj in objects:
        result.append({
            "id": obj.id, "name": obj.name, **obj.data,
        })
    return result

tests.py:

import asyncio

import pytest
import pytest_asyncio
from fastapi.testclient import TestClient

from app import app

@pytest.fixture(scope="module")
def event_loop():
    loop = asyncio.get_event_loop_policy().new_event_loop()
    yield loop
    loop.close()

@pytest_asyncio.fixture(scope="module")
async def get_db():
    await delete_database()
    await create_database()

@pytest.mark.parametrize("test_case", test_cases_post)
def test_post(get_db, test_case):
    with TestClient(app) as client:
        response = client.post(f"/{test_case['name']}", json=test_case["data"])
        assert response.status_code == test_case["res"]

@pytest.mark.parametrize("test_case", test_cases_get)
def test_get(get_db, test_case):
    with TestClient(app) as client:
        response = client.get(f"/{test_case['name']}")
        assert len(response.json()) == test_case["count"]

db.py:

from os import environ

from sqlalchemy import JSON, Column, Index, Integer, String, select
from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine
from sqlalchemy.orm import Session, declarative_base, sessionmaker

DATABASE_URL = environ.get("DATABASE_URL", "sqlite+aiosqlite:///./test.db")
engine = create_async_engine(DATABASE_URL, future=True, echo=True)
async_session = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)
Base = declarative_base()

async def delete_database():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all)

async def create_database():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)


class Model(Base):
    __tablename__ = "smth"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    data = Column(JSON, nullable=False)
    __table_args__ = (Index("idx_main", "name", "id"),)

async def create_object(db: Session, name: str, data: dict):
    connection = Model(name=name, data=data)
    db.add(connection)
    await db.flush()

async def get_objects(db: Session, name: str):
    raw_q = select(Model) \
        .where(Model.name == name) \
        .order_by(Model.id)
    q = await db.execute(raw_q)
    return q.scalars().all()

2 Answers


At the moment the testing code is quite coupled, so the test suite seems to work as follows:

  • the database is created once for all tests
  • the first set of tests runs and populates the database
  • the second set of tests runs (and will only succeed if the database is fully populated)

This has value as an end-to-end test, but I think it would work better if the whole thing were placed in a single test function.

As far as unit testing goes, it is a bit problematic. I'm not sure whether pytest-asyncio makes guarantees about test running order (there are pytest plugins that exist solely to make tests run in a deterministic order), and certainly the principle is that unit tests should be independent of each other.

The testing is coupled in another important way too: the database I/O code and the application logic are being tested simultaneously.

A practice that FastAPI encourages is to make use of dependency injection in your routes:

from fastapi import Depends, FastAPI, Request
...
def get_sessionmaker() -> Callable:
    # this is a bit baroque, but x = Depends(y) assigns x = y()
    # so that's why it's here
    return async_session

@app.post("/{name}")
async def create_obj(name: str, request: Request, get_session = Depends(get_sessionmaker)):
    data = await request.json()
    if data.get("code") in some_items:
        data["value"] = some_items[data["code"]]
        async with get_session() as session:
            async with session.begin():
                await create_object(session, name, data)
        return JSONResponse(status_code=200, content=data)
    else:
        return JSONResponse(status_code=404, content={})

When it comes to testing, FastAPI then allows you to swap out your real dependencies so that you can e.g. mock the database and test the application logic in isolation from database I/O:

from app import app, get_sessionmaker
from mocks import mock_sessionmaker
...
client = TestClient(app)
...
async def override_sessionmaker():
    return mock_sessionmaker

app.dependency_overrides[get_sessionmaker] = override_sessionmaker
# now we can run some tests

This will mean that when you run your tests, whatever you put in mocks.mock_sessionmaker will give you the get_session function in your tests, rather than get_sessionmaker. We could have our mock_sessionmaker return a function called get_mock_session.

In other words, rather than with async_session() as session:, in the tests we'd have with get_mock_session() as session:.

Unfortunately this get_mock_session has to return something a little complicated (let's call it mock_session), because the application code then does an async with session.begin().

I'd be tempted to refactor the application code for simplicity, but failing that, the mock session must not throw errors when you call .begin, .add, and .flush on it in this example, and those methods have to be async (or return async context managers). They don't have to do anything, though, so it's not too bad...
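To make that concrete, here is a minimal sketch of such a mock, assuming the route only ever calls .begin(), .add(), and .flush() on the session. All the names (MockSession, get_mock_session, demo) are illustrative, not part of any library:

```python
import asyncio
from contextlib import asynccontextmanager

class MockSession:
    """Records added objects; every method the route calls is a no-op."""

    def __init__(self):
        self.added = []

    def begin(self):
        # the application code does `async with session.begin():`
        return self._noop_transaction()

    @asynccontextmanager
    async def _noop_transaction(self):
        yield self

    def add(self, obj):
        self.added.append(obj)

    async def flush(self):
        pass

@asynccontextmanager
async def get_mock_session():
    # mirrors `async with async_session() as session:` in the route
    yield MockSession()

async def demo():
    # the same call sequence the POST route performs
    async with get_mock_session() as session:
        async with session.begin():
            session.add({"name": "smth"})
            await session.flush()
    return session.added
```

contextlib.asynccontextmanager does most of the work here: both "entering a session" and "beginning a transaction" become trivial async context managers, while the recorded .added list lets a test assert on what the route tried to persist without any database at all.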

The FastAPI docs have an alternative example of databases + dependencies that does leave the code a little coupled, but uses SQLite strictly for the purpose of unit tests, leaving you free to do something different for an end-to-end test and in the application itself.


1 Comment

Thanks for your answer! Yeah, I should write more independent tests here 😅 However, the original question was about problem with Postgres usage, so I cannot accept the answer as a solution, but I like it and mark as a useful one 👍

Personally, I use the following approach to get isolated tests: a clean database for each test in an async FastAPI app, combined with session dependency injection:

# client.py
from sqlalchemy.orm import sessionmaker
from fastapi.testclient import TestClient
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, AsyncEngine

from module import models
from module.main import app
from module.config import get_db

DATABASE_URL = "sqlite+aiosqlite:///:memory:"
engine: AsyncEngine = create_async_engine(DATABASE_URL)
async_session = sessionmaker(engine, expire_on_commit=False, class_=AsyncSession)

async def create_tables():
    async with engine.begin() as conn:
        await conn.run_sync(models.Base.metadata.create_all)

async def drop_tables():
    async with engine.begin() as conn:
        await conn.run_sync(models.Base.metadata.drop_all)

async def override_get_db() -> AsyncSession:
    async with async_session() as session:
        yield session

app.dependency_overrides[get_db] = override_get_db
client = TestClient(app)

# conftest.py
import pytest_asyncio
from sqlalchemy.ext.asyncio import AsyncSession
from . import client

@pytest_asyncio.fixture()
async def db() -> AsyncSession:
    async with client.async_session() as session:
        await client.create_tables()
        yield session
        await client.drop_tables()

# test_something.py
import pytest
from module import models
from . import client

@pytest.mark.asyncio
async def test_get_something(db):
    model = models.Model(name="something")
    db.add(model)
    await db.commit()
    response = client.client.get("/api/something")
    assert response.status_code == 200
    assert len(response.json()) == 1

It is worth noting that the PostgreSQL driver in the main app is asyncpg on top of SQLAlchemy, but the tests use in-memory SQLite with the aiosqlite driver.
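That split usually comes down to a single environment variable, as in the question's db.py. A minimal sketch of the idea (the URLs here are placeholders, not real credentials):

```python
import os

# Illustrative default for tests; Docker Compose injects the real URL in production.
TEST_URL = "sqlite+aiosqlite:///:memory:"

def database_url() -> str:
    # Unset in tests -> in-memory SQLite via aiosqlite;
    # the deployed app gets a postgresql+asyncpg URL from the environment.
    return os.environ.get("DATABASE_URL", TEST_URL)
```

The same create_async_engine / sessionmaker code then works against either backend, as long as the models avoid PostgreSQL-only features.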

