
unittest reports only the total time spent running all tests; it does not report the time spent on each test separately.

How to add timing of each test when using unittest?

9 Answers


I suppose that it's not possible for now: https://github.com/python/cpython/issues/48330.

But you can do something like this:

import unittest
import time

class SomeTest(unittest.TestCase):
    def setUp(self):
        self.startTime = time.time()
    
    def tearDown(self):
        t = time.time() - self.startTime
        print('%s: %.3f' % (self.id(), t))
        
    def testOne(self):
        time.sleep(1)
        self.assertEqual(int('42'), 42)
    
    def testTwo(self):
        time.sleep(2)
        self.assertEqual(str(42), '42')

if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromTestCase(SomeTest)
    unittest.TextTestRunner(verbosity=0).run(suite)

Result:

__main__.SomeTest.testOne: 1.001
__main__.SomeTest.testTwo: 2.002
----------------------------------------------------------------------
Ran 2 tests in 3.003s

OK

7 Comments

I guess it would be good to have this defined in some base class so that one could easily mix it in.
Definitely. This is only an example of one possibility.
Thanks for the answer. Can you explain why you have a sleep call at the start of each test case?
Can you also explain why the sleep is incremental in each test case?
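As the first comment suggests, the setUp/tearDown pair can be factored into a reusable mixin so any test case can opt in. A minimal sketch (TimingMixin and SomeTimedTest are made-up names):

```python
import time
import unittest

class TimingMixin:
    """Mixin that prints each test's wall-clock duration."""

    def setUp(self):
        super().setUp()
        self._start_time = time.time()

    def tearDown(self):
        elapsed = time.time() - self._start_time
        print('%s: %.3f' % (self.id(), elapsed))
        super().tearDown()

class SomeTimedTest(TimingMixin, unittest.TestCase):
    def test_answer(self):
        self.assertEqual(int('42'), 42)
```

Because the mixin calls `super()`, it cooperates with test classes that define their own setUp/tearDown.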

You can use pytest with --durations=0 and it will give you the execution time of each test (--durations=N lists only the N slowest).


Here is a variation of the script from horejsek's answer. It monkey-patches Django's TestCase so that every test case reports its total running time.

You can place this script in the root package's __init__.py, where your settings.py lives. After that you can run tests with ./manage.py test -s

from django import test
import time

# keep references to the original hooks so Django's own class-level
# setup and teardown (fixtures, class-wide atomics) still run
_orig_setUpClass = test.TestCase.setUpClass.__func__
_orig_tearDownClass = test.TestCase.tearDownClass.__func__


@classmethod
def setUpClass(cls):
    _orig_setUpClass(cls)
    cls.startTime = time.time()


@classmethod
def tearDownClass(cls):
    print("\n%s.%s: %.3f" % (cls.__module__, cls.__name__, time.time() - cls.startTime))
    _orig_tearDownClass(cls)


test.TestCase.setUpClass = setUpClass
test.TestCase.tearDownClass = tearDownClass
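For projects that don't use Django, the same per-class timing works with plain unittest and no monkey-patching. A sketch (TimedCase is a made-up name):

```python
import time
import unittest

class TimedCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.start_time = time.time()

    @classmethod
    def tearDownClass(cls):
        elapsed = time.time() - cls.start_time
        print('\n%s.%s: %.3f' % (cls.__module__, cls.__name__, elapsed))
        super().tearDownClass()

    def test_something(self):
        self.assertEqual(2 + 2, 4)
```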


Nose with the pinocchio extension has a stopwatch option that will give you this, if nose is an option for you.

It also has a ton of other useful features and plugins to make using unittest nicer.

1 Comment

Nose is not actively maintained and recommends migrating to pytest.

PyCharm CE (free) provides a clean view of unittest durations. It also helpfully aggregates by directory structure and file and allows you to sort by duration:

[screenshot: PyCharm's test runner showing per-test durations, aggregated by directory and file]

And as @horejsek mentioned, there's an issue for unittest to add duration measurement with an open PR: https://github.com/python/cpython/pull/12271.


You can use django-slowtests, which provides output like this:

$ python manage.py test
Creating test database for alias 'default'...
..........
----------------------------------------------------------------------
Ran 10 tests in 0.413s

OK
Destroying test database for alias 'default'...

Ten slowest tests:
0.3597s test_detail_view_with_a_future_poll (polls.tests.PollIndexDetailTests)
0.0284s test_detail_view_with_a_past_poll (polls.tests.PollIndexDetailTests)
0.0068s test_index_view_with_a_future_poll (polls.tests.PollViewTests)
0.0047s test_index_view_with_a_past_poll (polls.tests.PollViewTests)
0.0045s test_index_view_with_two_past_polls (polls.tests.PollViewTests)
0.0041s test_index_view_with_future_poll_and_past_poll (polls.tests.PollViewTests)
0.0036s test_index_view_with_no_polls (polls.tests.PollViewTests)
0.0003s test_was_published_recently_with_future_poll (polls.tests.PollMethodTests)
0.0002s test_was_published_recently_with_recent_poll (polls.tests.PollMethodTests)
0.0002s test_was_published_recently_with_old_poll (polls.tests.PollMethodTests)

If you look at django_slowtests/test_runner.py, you can also adapt the technique yourself.


A command-line-only solution:

1/ Install nose (a popular alternative test runner) and the pinocchio extension:

$ pip install nose pinocchio

2/ Run the tests with time recording (times are saved in the file .nose-stopwatch-times):

$ nosetests --with-stopwatch

3/ Display test names sorted by decreasing time:

$ python -c "import pickle,operator,signal; signal.signal(signal.SIGPIPE, signal.SIG_DFL); print('\n'.join('%.03fs: %s' % (v[1], v[0]) for v in sorted(pickle.load(open('.nose-stopwatch-times','rb')).items(), key=operator.itemgetter(1), reverse=True)))" | less
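The one-liner in step 3 is easier to maintain as a small helper. A sketch (print_sorted_timings is a made-up name; the pickle format, a dict mapping test name to seconds, is assumed from the one-liner):

```python
import operator
import pickle

def print_sorted_timings(path='.nose-stopwatch-times'):
    """Print test names sorted by decreasing duration from nose's pickle file."""
    with open(path, 'rb') as f:
        times = pickle.load(f)  # dict: test name -> duration in seconds
    for name, secs in sorted(times.items(), key=operator.itemgetter(1), reverse=True):
        print('%.3fs: %s' % (secs, name))
```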

3 Comments

Nose has been passé for a few years now. Everybody uses py.test, and rightly so.
I think this prints ms when it should print s
Note pinnochio typo: should be pinocchio

unittest now supports this natively (to the millisecond) with the --durations 0 option, added in Python 3.12.

$ python -m unittest Lib.test.test_ftplib --durations 0
..........................................s.............................................
Slowest test durations
----------------------------------------------------------------------
0.000s     test__all__ (Lib.test.test_ftplib.MiscTestCase)
0.002s     test_pwd (Lib.test.test_ftplib.TestFTPClass)
....
0.140s     test_storbinary_rest (Lib.test.test_ftplib.TestTLS_FTPClassMixin)
0.148s     test_data_connection (Lib.test.test_ftplib.TestTLS_FTPClass)
0.154s     test_storlines (Lib.test.test_ftplib.TestTLS_FTPClassMixin)
0.175s     test_retrbinary_rest (Lib.test.test_ftplib.TestTLS_FTPClassMixin)
0.204s     test_with_statement (Lib.test.test_ftplib.TestFTPClass)
0.208s     test_with_statement (Lib.test.test_ftplib.TestTLS_FTPClassMixin)
0.880s     test_mlsd (Lib.test.test_ftplib.TestFTPClass)
1.281s     test_mlsd (Lib.test.test_ftplib.TestTLS_FTPClassMixin)

----------------------------------------------------------------------
Ran 88 tests in 5.396s

OK (skipped=1)
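The same report is available programmatically: on Python 3.12+, TextTestRunner accepts a durations argument. A minimal sketch, assuming Python 3.12+ (SlowTest is a made-up name):

```python
import time
import unittest

class SlowTest(unittest.TestCase):
    def test_sleep(self):
        time.sleep(0.05)

def run_with_durations():
    # durations=0 prints the full "Slowest test durations" table;
    # durations=N would list only the N slowest tests (Python 3.12+)
    suite = unittest.TestLoader().loadTestsFromTestCase(SlowTest)
    return unittest.TextTestRunner(durations=0).run(suite)
```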


I subscribe to all the other answers, but most of them have one little issue.

It is actually more accurate to use time.perf_counter() instead of time.time():

def perf_time():
    start = time.perf_counter()
    time.sleep(1)
    end = time.perf_counter()
    print(end - start)

def time_time():
    start = time.time()
    time.sleep(1)
    end = time.time()
    print(end - start)


>>> time_time()
1.0010485649108887
>>> perf_time()
1.0010385409987066

The difference here is quite small, but sometimes it can really matter: unlike time.time(), perf_counter() uses the highest-resolution clock available and is guaranteed to be monotonic, so it is never thrown off by system clock adjustments.

I can also share a nice decorator for this topic:

import time
import unittest


def time_func(func):
    """Print the wall-clock duration of the decorated test method."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        print(f"{func.__name__}: {end - start:.3f}s")
        return result
    return wrapper


class TestCase(unittest.TestCase):

    @time_func
    def test_case(self):
        self.assertEqual(str(42), '42')
