
Right now, I have a Python package (let's call it mypackage) with a bunch of tests that I run with pytest. One particular feature can have many possible implementations, so I have used the funcarg mechanism to run these tests with a reference implementation.

# In mypackage/tests/conftest.py
import mypackage

def pytest_funcarg__Feature(request):
    return mypackage.ReferenceImplementation

# In mypackage/tests/test_stuff.py
def test_something(Feature):
    assert Feature(1).works

Now, I am creating a separate Python package with a fancier implementation (fancypackage). Is it possible to run all of the tests in mypackage that use the Feature funcarg, but with different implementations?

I would like to avoid having to change fancypackage whenever I add new tests to mypackage, so explicit imports aren't ideal. I know that I can run all of the tests with pytest.main(), but since I have several implementations of my feature, I don't want to call pytest.main() multiple times. Ideally, it would look something like this:

# In fancypackage/tests/test_impl1.py
import fancypackage

def pytest_funcarg__Feature(request):
    return fancypackage.Implementation1
## XXX: Do pytest collection on mypackage.tests, but don't run them

# In fancypackage/tests/test_impl2.py
import fancypackage

def pytest_funcarg__Feature(request):
    return fancypackage.Implementation2
## XXX: Do pytest collection on mypackage.tests, but don't run them

Then, when I run pytest in fancypackage, it would collect each of the mypackage.tests tests twice, once for each feature implementation. I have tried doing this with explicit imports, and it seems to work fine, but I don't want to explicitly import everything.
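Roughly, the explicit-import version looks like this (a sketch based on the test_stuff.py example above; every upstream test has to be imported by hand, which is what I want to avoid):

# In fancypackage/tests/test_impl1.py (explicit-import version)
import fancypackage

# Importing the test function into this module makes pytest collect it here,
# where it picks up this module's Feature funcarg.
from mypackage.tests.test_stuff import test_something

def pytest_funcarg__Feature(request):
    return fancypackage.Implementation1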

Bonus

An additional nice bonus would be to only collect those tests that contain the Feature funcarg. Is that possible?

Example with unittest

Before switching to py.test, I did this with the standard library's unittest. The function that built the suite looked like this:

import unittest

def mypackage_test_suite(Feature):
    # Collect every discovered TestCase that uses a Feature attribute and
    # bind the given implementation to it.
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    mypackage_tests = loader.discover('mypackage.tests')
    for test in all_testcases(mypackage_tests):
        if hasattr(test, 'Feature'):
            test.Feature = Feature
            suite.addTest(test)
    return suite

def all_testcases(test_suite_or_case):
    # Recursively flatten nested TestSuites into individual TestCases.
    try:
        suite = iter(test_suite_or_case)
    except TypeError:
        yield test_suite_or_case
    else:
        for test in suite:
            for subtest in all_testcases(test):
                yield subtest
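
Running the suite against a particular implementation then looked something like this (the choice of TextTestRunner is just for illustration):

# e.g., run mypackage's tests against fancypackage's first implementation
import fancypackage

runner = unittest.TextTestRunner()
runner.run(mypackage_test_suite(fancypackage.Implementation1))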

Obviously things are different now because we're dealing with test functions and classes instead of just classes, but it seems like there should be some equivalent in py.test that builds the test suite and allows you to iterate through it.

  • Hi @tbekolay, looking back at this a few years later, are you aware of any good solution to your problem? Commented Oct 14, 2021 at 10:31
  • We ended up making a custom plugin in the upstream project, then running the tests in the downstream project with --pyargs upstream which, because of things set in the conftest.py, ran the upstream tests with downstream classes. Commented Oct 14, 2021 at 20:14
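
For later readers, here is a minimal sketch of that general shape. The real setup used a custom pytest plugin; to keep this self-contained, the upstream fixture here picks its implementation from an environment variable instead (the variable name and the dotted-path lookup are assumptions, not what the project actually did):

# In mypackage/tests/conftest.py (upstream) -- a simplified sketch
import importlib
import os

import pytest
import mypackage

@pytest.fixture
def Feature():
    # Downstream projects point this at their own class, e.g.
    # MYPACKAGE_FEATURE=fancypackage.Implementation1
    dotted = os.environ.get('MYPACKAGE_FEATURE')
    if not dotted:
        return mypackage.ReferenceImplementation
    module_name, _, class_name = dotted.rpartition('.')
    return getattr(importlib.import_module(module_name), class_name)

The downstream project then runs the upstream tests with something like MYPACKAGE_FEATURE=fancypackage.Implementation1 py.test --pyargs mypackage.tests.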

1 Answer


You could parameterise your Feature fixture:

import pytest
import mypackage
import fancypackage

@pytest.fixture(params=['ref', 'fancy'])
def Feature(request):
    if request.param == 'ref':
        return mypackage.ReferenceImplementation
    else:
        return fancypackage.Implementation1

Now if you run py.test it will test both.

Selecting tests based on the fixture they use is not possible AFAIK; you could probably cobble something together using request.applymarker() and -m, however.


6 Comments

Ideally, fancypackage would not run the reference implementation tests, but you're right that parametrising the feature could be useful if I can at least load all the tests once from mypackage. I've updated the question with a unittest snippet that does what I'm looking for, in case that's helpful.
a way to tell pytest "give me a collection of all tests starting HERE" and then "add those to my test collection" would be a nice feature. It's some work to work out the exact UI and implementation but certainly feasible, probably by adding a new hook.
I'm happy to help with implementing that functionality @hpk42! I've made an issue to discuss the idea and the UI.
Another possibility which might work right now is to add a command line option to select which fixture implementation to use, and call pytest.skip() in the fixture if the command line option disables it. That way you can use parametrisation and only execute tests for one fixture (see the sketch after these comments).
The linked issue above moved to: github.com/pytest-dev/pytest/issues/421
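
As a rough illustration of the command-line-option suggestion above, one way it could look; the option name, the 'impl1'/'impl2' labels, and the placement of the conftest.py are all assumptions:

# In conftest.py at the top of the fancypackage project -- a sketch
import pytest
import mypackage
import fancypackage

IMPLEMENTATIONS = {
    'ref': mypackage.ReferenceImplementation,
    'impl1': fancypackage.Implementation1,
    'impl2': fancypackage.Implementation2,
}

def pytest_addoption(parser):
    # Hypothetical option; limits a run to a single implementation.
    parser.addoption('--feature', action='store', default=None,
                     help='only run the Feature tests for this implementation')

@pytest.fixture(params=sorted(IMPLEMENTATIONS))
def Feature(request):
    chosen = request.config.getoption('--feature')
    if chosen is not None and chosen != request.param:
        pytest.skip('disabled via --feature=%s' % chosen)
    return IMPLEMENTATIONS[request.param]

Plain py.test then runs every test that requests Feature against all three implementations, while py.test --feature=impl1 skips the other two.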
