
Tests with exceptions marked as broken

See original GitHub issue

I’m submitting a …

  • bug report
  • feature request
  • support request => Please do not submit support requests here; see the note at the top of this template.

What is the current behavior?

If a test raises any kind of exception, the Allure plugin marks it as broken.

If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem

Test example:

def test_stuff():
    raise Exception('whoa!')

JSON:

{"name": "test_stuff", "status": "broken", "statusDetails": {"message": "Exception: whoa!", "trace": "test_4_stuff.py:14: in test_stuff\n    raise Exception('whoa!')\nE   Exception: whoa!"}}

What is the expected behavior?

If a test raises an exception, the test should have "status": "failed" in <...>-result.json.
But if the exception happened in, say, a fixture, then something genuinely unexpected occurred, and it makes sense to mark the test as broken.
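
For reference, pytest itself already draws this line: an exception raised during fixture setup is reported as an error rather than a failure. A minimal sketch (hypothetical names):

import pytest


@pytest.fixture
def session():
    # a broken precondition: pytest reports the test as ERROR, not FAILED
    raise RuntimeError('could not create session')


def test_uses_session(session):
    # never runs; setup already blew up
    assert session is not None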

What is the motivation / use case for changing the behavior?

I think raising exceptions is normal in Python. If something goes wrong in my test, it will raise an exception rather than return None…

Please tell us about your environment:

  • Test framework: pytest@3.6.0
  • Allure adaptor: allure-pytest@2.3.3b1
  • Python: 3.6.5

Other information

…but I’m not sure it is necessary to mark every test that raises an exception as failed.
For example, if my test encountered some MyApiException, it’s safe to assume the test is not broken; it simply failed. But if there is something serious like requests.exceptions.ConnectionError, something is genuinely wrong, because my test assumed the connection would be established and that it could at least make a request. IMHO, there could be an argument such as --allure-exception=BaseException: if a test failed because of an exception inherited from BaseException, it would be marked as failed; otherwise the test is really broken.
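
To illustrate the idea (the flag and helper below are hypothetical; no such option exists in allure-pytest), it could be wired up in a conftest.py roughly like this:

# conftest.py -- hypothetical sketch of the proposed option
import importlib


def pytest_addoption(parser):
    parser.addoption(
        '--allure-exception', default='builtins.Exception',
        help='tests failing with an exception inherited from this class '
             'are reported as failed; anything else is reported as broken')


def resolve_exception_class(dotted_name):
    # e.g. 'requests.exceptions.ConnectionError' -> the class object
    module_name, _, class_name = dotted_name.rpartition('.')
    return getattr(importlib.import_module(module_name), class_name)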

Issue Analytics

  • State: closed
  • Created: 5 years ago
  • Comments: 5 (3 by maintainers)

Top GitHub Comments

1 reaction
Sup3rGeo commented, Jun 15, 2018

My opinion is that AssertionErrors should make tests fail and any other exception should make them broken.

Then, if you expect custom behaviour in your system, just catch MyApiException and fail the test manually (e.g. raise an AssertionError). Any other type of exception would then make the test broken, as usual.
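
For instance (a hypothetical sketch; api and MyApiException stand in for your own code):

def test_create_folder(api):
    try:
        api.create_folder('reports')
    except MyApiException as exc:
        # expected application-level error -> plain test failure
        raise AssertionError(f'API call failed: {exc}')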

You could even implement this as a small pytest plugin in your conftest (https://docs.pytest.org/en/latest/reference.html#_pytest.hookspec.pytest_runtest_protocol) so the conversion happens automatically for all your tests. Something like the following (untested sketch; a hookwrapper around pytest_runtest_call, which is where the test's exception can actually be intercepted):

# conftest.py
import pytest

from myproject.exceptions import MyApiException  # wherever your exception lives


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    outcome = yield
    try:
        # re-raises whatever exception the test body raised, if any
        outcome.get_result()
    except MyApiException as exc:
        # convert the expected API error into an assertion failure,
        # so Allure reports the test as failed instead of broken
        raise AssertionError(str(exc)) from exc
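
With a wrapper like this, an expected MyApiException surfaces as an AssertionError and Allure reports the test as failed, while anything unexpected (a ConnectionError, say) is still reported as broken.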

0 reactions
kam1sh commented, Sep 22, 2018

@Sup3rGeo

My opinion is that AssertionErrors should make tests fail and any other exception should make them broken.

Sorry for the late reply. But the behaviour I described in the ticket is the behaviour of pytest itself. Here’s an error in a fixture:

=============================================================================================== test session starts ================================================================================================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.2, pluggy-0.6.0 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.1.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/igor/<...>/tests, inifile: pytest.ini
plugins: allure-pytest-2.5.1, benchmark-3.1.1
collected 35 items / 34 deselected                                                                                                                                                                                 

frontend/test_3_permissions.py::test_folder_permissions[localhost-chrome] ERROR                                                                                                                              [100%]

====================================================================================================== ERRORS ======================================================================================================
___________________________________________________________________________ ERROR at setup of test_folder_permissions[localhost-chrome] ____________________________________________________________________________
conftest.py:79: in cleanup
    u_sess = api.auth.Session(creds[0])
E   TypeError: __init__() should return None, not 'Session'
====================================================================================== 34 deselected, 1 error in 2.59 seconds ======================================================================================

And here’s a test that failed because of an exception raised inside it:

=============================================================================================== test session starts ================================================================================================
platform linux -- Python 3.6.5, pytest-3.6.0, py-1.5.2, pluggy-0.6.0 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.1.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/igor/<...>/tests, inifile: pytest.ini
plugins: allure-pytest-2.5.1, benchmark-3.1.1
collected 35 items / 34 deselected                                                                                                                                                                                 

frontend/test_3_permissions.py::test_folder_permissions[localhost-chrome] FAILED                                                                                                                             [100%]

===================================================================================================== FAILURES =====================================================================================================
____________________________________________________________________________________ test_folder_permissions[localhost-chrome] _____________________________________________________________________________________
frontend/test_3_permissions.py:51: in test_folder_permissions
    <...>
<...>: in request_backend:
    f'{req_type.upper()} {uri} - HTTP {resp.status_code}')
E   <...>.exceptions.ApiException: POST <...> - HTTP 403
===================================================================================== 1 failed, 34 deselected in 3.06 seconds ======================================================================================

Yes, of course, I can use various pytest hooks as a workaround: wrap tests in context managers with __enter__ and __exit__, edit the Allure JSON at the end of the test run, or rewrite the tests to call pytest.fail instead of raising exceptions (but then I couldn’t handle them with with pytest.raises(Exception):), etc…
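
(To illustrate that last point with a hypothetical sketch: pytest.fail raises pytest's internal Failed outcome, which derives from BaseException, so pytest.raises(Exception) will not catch it.)

import pytest


def request_backend():
    # stand-in for a real API helper
    pytest.fail('POST /api/folders - HTTP 403')


def test_folder_permissions():
    with pytest.raises(Exception):
        # Failed derives from BaseException, so it escapes this block
        # and the test fails with the pytest.fail message anyway
        request_backend()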

Anyway, this issue is not major; it just makes the report somewhat harder to analyze. Given everything else Allure provides, it’s really not a big problem. I just don’t understand why the behaviour of allure-python differs so much from the behaviour of pytest.
