
Although I've been using Python for a number of years now, I realised that, working predominantly on personal projects, I never needed to do unit testing before, so apologies for the obvious questions or wrong assumptions I might make. My goal is to understand how I can write tests and possibly combine everything with a GitHub workflow to create some automation. I've seen that failures and errors (which are conceptually different) thrown locally are not treated differently once the tests run online. But before I go on, I have some doubts that I want to clarify.

From reading online, my initial understanding is that a test should always SUCCEED, even if it contains errors or failures. But if it succeeds, how can I then record a failure or an error? So I'm tempted to say I've understood this the wrong way. I appreciate that in an Agile environment some would say it's a controlled process, and errors can be intercepted while looking into the code. But I'm not sure this is the best approach.
And this leads me to the second question.

Say I have a function accepting dates, and I know that it cannot accept anything other than dates.

  1. Would it make sense to write a test that, say, passes in strings (and gets a failure)?
  2. Or should I test only for the expected circumstances?

Say case 1) is a best practice; what should I do when running these tests? Should I let the test fail and get a long list of errors? Or should I decorate functions with a @pytest.mark.xfail() (a sort of soft fail, where I can use a try ... except)?

And a last question (for now): would an xfail decorator let the workflow automation consider the test as "passed"? Probably not, but at this stage I have so much confusion in my head that any clarity from experienced users would help.

Thanks for your patience in reading.

  • First, I'm not clear where you got the notion that a test shall succeed even if it contains errors - I would say that this is a misconception, or I didn't understand it. Your second question is also not completely clear to me: you should test possible inputs to your function, "possible" depending on your context. And xfailed tests are tests that are expected to fail, so they will not generate an error if they fail (they will just be reported as xfailed in this case). Commented Oct 23, 2021 at 16:44
  • Thanks @MrBeanBremen. The first part ("a test should always succeed") was mostly from some reading of people working in Agile teams who would expect a test to throw errors in the log but SUCCEED and let all the other tests continue. Second question: what if the signature is not clear or there is no documentation? Aren't you expected to test everything, including the impossible? Are xfailed tests executed at all? As far as I could see, as soon as the decorator is used, the test is completely ignored. Am I wrong? Commented Oct 24, 2021 at 15:26
  • I put this into an answer to avoid long comments... Commented Oct 24, 2021 at 16:47

1 Answer


The question is a bit fuzzy, but I'll give it a shot.

  1. The notion that tests should always succeed even if they have errors is probably a misunderstanding. Failing tests are errors and should be shown as such (with the exception of tests known to fail, but that is a special case, see below). From the comment I guess what was actually meant is that the other tests shall continue to run even if one test fails - that certainly makes sense, especially in CI tests, where you want to get the whole picture.

  2. If you have a function accepting dates and nothing else, it shall be tested that it indeed only accepts dates, and raises an exception (or handles the input in some other defined way) in case an invalid input is given. What I meant in the comment is: if your software ensures that only a date can be passed to that function, and this is also ensured via tests, it would not be necessary to test this again - but in general, yes, this should be tested.

So, to give a few examples: if your function is specified to raise an exception on invalid input, this has to be tested using something like pytest.raises - the test would fail if no exception is raised. If your function shall handle invalid dates by logging an error, the test shall verify that the error is logged. If an invalid input should just be ignored, the test shall ensure that no exception is raised and the state does not change. A sketch of the first two styles follows below.
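A minimal sketch of those two styles (the functions schedule_strict and schedule_logging are hypothetical stand-ins, not code from the question; caplog is pytest's built-in log-capturing fixture):

import datetime
import logging
import pytest

def schedule_strict(when):
    # Hypothetical: specified to raise TypeError for anything but a date.
    if not isinstance(when, datetime.date):
        raise TypeError(f"expected date, got {type(when).__name__}")
    return when

def schedule_logging(when):
    # Hypothetical: specified to log an error and return None on invalid input.
    if not isinstance(when, datetime.date):
        logging.getLogger(__name__).error("invalid input: %r", when)
        return None
    return when

def test_strict_raises_on_string():
    # The test fails if no TypeError is raised.
    with pytest.raises(TypeError):
        schedule_strict("2021-10-24")

def test_logging_variant_logs_error(caplog):
    # caplog captures the log records emitted during the call.
    with caplog.at_level(logging.ERROR):
        assert schedule_logging("2021-10-24") is None
    assert "invalid input" in caplog.text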

  3. For xfail, I just refer you to the pytest documentation, where this is described nicely:

An xfail means that you expect a test to fail for some reason. A common example is a test for a feature not yet implemented, or a bug not yet fixed. When a test passes despite being expected to fail (marked with pytest.mark.xfail), it’s an xpass and will be reported in the test summary.

So a passing xfail test will indeed be counted as passed. You can easily test this yourself:

import pytest

@pytest.mark.xfail
def test_fails():
    # expected to fail and does fail -> reported as xfailed
    assert False

@pytest.mark.xfail
def test_succeeds():
    # expected to fail but passes -> reported as xpassed
    assert True

gives something like:

============================= test session starts =============================
collecting ... collected 2 items

test_xfail.py::test_fails XFAIL
test_xfail.py::test_succeeds XPASS

======================== 1 xfailed, 1 xpassed in 0.35s ========================

and the test run is considered passed (e.g. it has exit code 0).
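If your workflow automation should flag an unexpected pass instead of silently going green, pytest's strict xfail mode turns an XPASS into a failure (this is standard pytest behaviour, described on the same documentation page):

import pytest

# With strict=True, an XPASS (the test passes although it was marked as
# expected to fail) is reported as a failure, so CI goes red.
@pytest.mark.xfail(strict=True)
def test_succeeds():
    assert True

The same behaviour can be enabled globally with xfail_strict = true in the pytest configuration file.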


4 Comments

Thanks again for your time @MrBean Bremen. On point 2), the date test: say there is no raise-exception code in the main function because the coder hasn't done all the necessary homework. Passing a string where a datetime object is expected will raise an exception anyway. Would you test a call with string inputs, catching the exception in a try ... except block and logging an error while letting the test pass, or would you raise an exception yourself and let the test fail?
I added more explanation to the answer, please check.
Thanks. It all makes sense, and as I continued my reading I arrived at the same conclusions. My only doubt now is: an invalid input like a string, on a function that expects dates, throws an exception regardless of any effort the coder has put into validation. So would that test make sense?
Yes, generally that still makes sense. If raising a specific exception is what is expected, this shall be tested, lest it be broken by some change, for example in the library that actually raises it.
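To make that last exchange concrete, a minimal sketch (the function days_until is a hypothetical stand-in that contains no explicit validation at all):

import datetime
import pytest

def days_until(target):
    # Hypothetical function under test: assumes a date, does no validation.
    return (target - datetime.date.today()).days

def test_rejects_string_input():
    # Subtracting a date from a str raises TypeError even though the
    # function contains no validation code - the test pins down that
    # behaviour so a later change cannot silently alter it.
    with pytest.raises(TypeError):
        days_until("2021-10-24")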
